API

This week we talked a lot about APIs, and to my surprise I had assumed they did much more than they actually do.  Another peculiarity is that the only programmatic operations they support are the ones provided by HTTP calls, which is also something I had never dealt with outside of cluelessly browsing the web.  Nevertheless, they are very important and remarkably versatile across a variety of operations.  This is all very new to me, so this blog may show not only useful things I’ve learned but also crude misconceptions I may have formed by mistake; in the latter case, please disregard them or give me a heads up and I’ll make corrections.

The HTTP methods are GET, HEAD, POST, PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH.  My knowledge of the subject prevents me from saying whether that is all of them, so refer to https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods, which has comprehensive descriptions.  In our most recent assignment we used GET, POST, PUT, and DELETE, and I would like to summarize their functions for my own benefit (a small sketch of each call follows the list below).

  • The GET request is used to retrieve data from the server; it only reads data and should never change the state of the server.
  • The PUT method is used to send data that creates or replaces a resource on the server. It differs from POST because it is idempotent (big word); please refer to https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.2
  • The POST method is similar to PUT but it is not idempotent. The simpler way to put it, oh sorry, to post it, is that POST creates a new instance every time it is used, while PUT replaces the existing one. This article gives a plain-English explanation: https://reqbin.com/Article/HttpPost.
  • The DELETE request does what it says: it deletes the requested data from the server and, unlike GET and HEAD (which we are not covering here), it may change the state of the server.  A cool thing I learned is that some servers may reject a DELETE request, which makes sense; deleting data should be a restricted action.
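To make these concrete, here is a rough sketch of what each call might look like from JavaScript/TypeScript code using fetch; the /items endpoint and the fields are invented for illustration, not taken from our assignment.

// Hypothetical item API; paths and fields are made up for this example.
async function demo() {
  await fetch("/items/17");                              // GET: read item 17
  await fetch("/items", {                                // POST: create a new item
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "Blue Hat" }),
  });
  await fetch("/items/17", {                             // PUT: replace item 17
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: "Red Hat" }),
  });
  await fetch("/items/17", { method: "DELETE" });        // DELETE: remove item 17
}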

Something else I’ve learned from this assignment has to do with the YAML file, which I looked up in order to know where it comes from and what it stands for.  There are other things I wanted to talk about related to APIs, but too many to fit here, so I’ll end with YAML.  YAML is a superset of JSON, which is a JavaScript-derived way of representing and structuring data, so YAML can do anything JSON does and more.  JSON is formatted using braces and brackets, while YAML uses colons and two-space indentation.  In my opinion, indentation-based conventions like this (as in Python) make for messy nesting that is hard to visualize.  Like JSON, YAML is language-independent, which makes it a good tool to use.  https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/
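To show the difference concretely, here is the same made-up record written both ways; the data is invented for illustration, not taken from the assignment.  In JSON:

{ "course": "CS-343", "topics": ["APIs", "YAML"] }

and the equivalent in YAML:

course: CS-343
topics:
  - APIs
  - YAML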

From the blog CS@Worcester – technology blog by jeffersonbourguignoncoutinho and used with permission of the author. All other rights reserved by the author.

Server Side and Node.js

The internet could not exist without servers to handle the exchange of data between devices. Given the importance of servers, the software and systems that run on them are equally important. The programming of these applications is called server-side development, and it is a large field of computer science.

Most server-side applications are designed to interact with client-side applications and handle the transfer of data. A common form of server-side development is supporting web pages and web applications. An increasingly popular platform for web application backends is Node.js, a server-side runtime based on JavaScript.

A benefit of Node.js being based on JavaScript is that the same language can be used on the front end and the back end. Because JavaScript handles JSON data natively, working with data on the server side becomes much easier compared to some other languages. As the name suggests, JavaScript is a scripting language, so code for Node.js does not need to be compiled prior to running. Node.js uses the V8 engine developed by Google to efficiently compile JavaScript at runtime. This feature can speed up development time when running smaller files frequently, especially during testing.
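A tiny illustration of that native JSON handling (the record here is made up): the JSON text maps straight onto a JavaScript object with no extra parsing library.

const raw = '{"user": "alice", "visits": 3}';  // JSON text, e.g. from a request body
const record = JSON.parse(raw);                // now a plain JavaScript object
record.visits += 1;
console.log(JSON.stringify(record));           // serialized back to JSON text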

Node.js also comes bundled with a command-line utility called Node Package Manager. Abbreviated npm, it manages open-source libraries for Node.js projects and easily installs them into the project directory. The npm package repository is comparable to the Maven repository for Java. However, according to modulecounts.com, npm is over three times larger than Maven, with nearly 1.8 million packages compared to fewer than 500 thousand. Each Node.js project has a package.json file where settings for the project are defined, such as run scripts, the version number, required npm packages, and author information.
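A minimal package.json might look something like this; the name, scripts, versions, and dependency shown are invented for illustration.

{
  "name": "example-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node index.js",
    "test": "jest"
  },
  "dependencies": {
    "express": "^4.18.0"
  },
  "author": "Jane Doe"
}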

The majority of Node.js applications share common packages that have become standard frameworks throughout the community. An example of this is Express.js, which is a backend web framework that handles API routing and data transfer in Node.js.
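As a rough sketch of how Express handles routing (assuming Express has been installed with npm; the route and port are arbitrary examples):

import express from "express";

const app = express();
app.use(express.json()); // parse JSON request bodies automatically

// Hypothetical route: respond to GET /status with a small JSON payload
app.get("/status", (req, res) => {
  res.json({ ok: true });
});

app.listen(3000); // port chosen arbitrarily for this example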

At the core of Node.js is the event loop, which is responsible for finding the next operation to run. This allows code to be executed out of order, without waiting for an unrelated operation to finish before starting the next one. The asynchronous-by-default nature of Node.js is ideal for web servers, where many users continuously request different tasks with differing speeds of execution. When people visit different API routes, they expect the server to respond as quickly as possible and not be hung up on a previous request.
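A tiny sketch of that nonblocking behavior (the file name is hypothetical): the slow file read is handed off, the event loop keeps going, and the callback runs whenever the read finishes.

import { readFile } from "fs";

console.log("request received");
readFile("data.json", "utf8", (err, contents) => {
  // Runs later, when the read completes; nothing was blocked while waiting.
  if (err) {
    console.error(err);
    return;
  }
  console.log("file length:", contents.length);
});
console.log("still free to handle other requests");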

I have used Node.js before but now that we have begun to use it in class, I wanted to learn more about its benefits in server-side development. For me, the most important takeaway is the need to take advantage of Node.js’s nonblocking ability when developing a program. Doing so will improve the speed of the application and increase usability.


Sources: https://www.educative.io/blog/what-is-nodejs, http://www.modulecounts.com

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

CS-343 Post #4

I wanted to read more about REST APIs after working with them in the past couple of activities and having some questions about them due to mistakes I was making. I gained a better understanding of the concept as the activities continued, but there are some things I wanted to clear up based on what I got wrong or had trouble understanding at first.

From the activities, I know that APIs are used by applications and are built with code. A REST API is a version of an API that follows REST standards and constraints, which seem to work closely with HTTP methods. When working on the activity, I kept confusing some of the methods and mixing up the API methods with the resource paths. I was also a little confused at first about the request body after activity 12, but after activity 13 I understood it better: it is basically the section of the request where you enter the information for the method. For example, you can provide the name and ID of an item when creating it, or enter a new name for an existing item given its ID (a small illustration follows).
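To illustrate, here is roughly what those two request bodies might look like; the field names and values are hypothetical, not copied from the activities.

// Body sent with a POST to create a new item
const createBody = { id: 17, name: "Blue Hat" };

// Body sent with a PUT to rename the existing item with ID 17
const renameBody = { name: "Red Hat" };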

I wanted to look more into what REST is about, and found a good article by Red Hat called “What is a REST API?” that goes over REST APIs and has a section just on REST. REST is described as a set of constraints, not protocols or standards, and the article explains how HTTP methods are used with it. It also mentions the multiple formats that HTTP can transfer with REST APIs, such as JSON, HTML, and PHP. Headers are considered very important because they contain identifying data about each request, such as the URL and other metadata.

Some of the criteria for an API to be considered “RESTful” are a client-server architecture, stateless communication, cacheable data, a uniform interface for components, a layered system for organization, and optional code on demand. The uniform interface is also said to require that requested resources are identifiable and can be manipulated by the client, that messages are self-descriptive, and that hypermedia is available. While there is a sizable set of criteria an API must meet to be RESTful, meeting them pays off because the API can end up faster and easier to manage through its methods.

I have a better understanding of what makes an API qualified to be RESTful, and I see where I was making my mistakes in the activities.

https://www.redhat.com/en/topics/api/what-is-a-rest-api

From the blog Jeffery Neal's Blog by jneal44 and used with permission of the author. All other rights reserved by the author.

Code Smells

A code smell is a surface-level indicator that in most cases corresponds to a much bigger and deeper problem in the system. The term was first coined by Kent Beck and became widely known after it appeared in Martin Fowler’s book on refactoring. Code smells are quite subjective and differ depending on the language, the developers, the methodology, and other factors.

What are some frequently seen smells?

Bloaters: These are pieces of code, classes, and methods that have grown very large over time through the accumulation of functionality and the creep of features.

For example: long methods, god classes, and long lists of parameters.
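As a small, made-up illustration of the long-parameter-list smell and one common fix, grouping the parameters into a single object:

// Smell: a long parameter list is hard to read and easy to call incorrectly.
function registerUser(name: string, email: string, street: string, city: string, zip: string, newsletter: boolean) {
  // ...
}

// One refactoring: introduce a parameter object so the call site is self-describing.
interface Registration {
  name: string;
  email: string;
  address: { street: string; city: string; zip: string };
  newsletter: boolean;
}

function registerUserFromForm(registration: Registration) {
  // ...
}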

Dispensables: This smell refers to code that is otherwise known as dead code, which is never called or executed. These are unnecessary blocks of code that offer no benefit and only increase technical debt.

Duplication and premature generalization: This smell covers duplicated code and objects that are generalized before there is any real need for it; both call for refactoring.

Couplers: This smell appears when code that should be independent ends up bound together, due to a lack of access control or excessive delegation.

Ex: classes that track each other’s behavior and make use of one another’s private and internal members.

Developers are mostly trained to spot the logical errors that have been accidentally introduced into their code. Errors of this type range from forgotten edge cases that are never handled to logic mistakes that can crash entire systems. Code smells, by contrast, are signals that our code needs to be refactored in order to improve its design, maintainability, and readability.

The presence of code smells is a very serious topic, despite the names being perhaps a bit ridiculous. Anyone with even a little experience in software development knows that code smells can seriously slow down software releases.

By using code smell detection tools, and by addressing the smells in short, controlled refactoring sessions, we can get past the surface impact of a smell and discover the deeper problem that lies within the software. Code smells are in many cases vital clues for deciding when to refactor and which refactoring techniques to use.

Finding code smells is an essential part of how software is developed. The other part is digging into their root causes and fixing them through what is called refactoring.

The most common categories of code smells are:

  • Bloaters
  • Object-Orientation Abusers
  • Change Preventers
  • Dispensables
  • Couplers

The individual smells encountered most often are:

Long Method
Duplicate Code
Inheritance Method
Data Clumps
Middle Man
Primitive Types
Divergent Change
Shotgun Surgery
Feature Envy
Primitive Obsession
Lazy Class
Type Embedded in Name
Uncommunicative Name
Dead Code

References:

https://levelup.gitconnected.com/10-programming-code-smells-that-affect-your-codebase-e66104e0341d

https://www.infoq.com/news/2016/09/refactoring-code-smells/

https://8thlight.com/blog/georgina-mcfadyen/2017/01/19/common-code-smells.html

From the blog CS@worcester – Xhulja's Blogs by xmurati and used with permission of the author. All other rights reserved by the author.

Architecture

https://martinfowler.com/architecture/

This article on software architecture from Martin Fowler asks the questions “What is architecture?” and “Why does architecture matter?”. The answers are developed throughout the article, which begins with why he does not always like using the term architecture, though he accepts it because he believes good architecture supports the evolution of a program. Fowler believes that architecture should be defined as “Architecture is about the important stuff. Whatever that is”. He explains that the precise definition of architecture has been debated among computer scientists for a long time, and that the definition he gives in this article came out of a conversation between himself and another professional, Ralph Johnson.

Although Fowler’s definition is his own and very simple, it makes sense, just like many other computer scientists’ “definitions” of software architecture. Clearly, architecture can be defined in many different ways depending on who you ask, but Fowler also argues that architecture matters because bad architecture affects a system in several ways: it becomes harder to modify, it develops more bugs when updated, and updates and new features are released more slowly, which impacts the customer for whom the system was designed. 

Fowler also discusses two different types of architecture which are application and enterprise architecture. Application architecture is based on an “application” which in Fowler’s words is a body of code perceived as a single unit by developers. Enterprise architecture is based on larger amounts of code which work independently from one another in different systems but all used by one company or “enterprise”.

I chose this article as it takes more of a thoughtful approach to the concept of software architecture and in part helps the reader open their understanding of the subject to different “definitions” of the concept which can help when developing software architecture in the future. I feel as though this article would be helpful to anyone in the programming field as it gives you multiple perceptions of architecture, what it is, and why it is important.

This article taught me the general idea of both application and enterprise architecture. I was able to see what Fowler and Johnson viewed architecture as and why they viewed it that way when other computer scientists may define or view architecture in a different light. I learned that architecture itself is very complicated and cannot be defined under one singular definition as it fits many different definitions for many different people in computer science.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

REST API

Hello all who are reading this; in this post I’ll be discussing a blog that I found at the link just below this paragraph. This particular blog post, written by Douglas Minnaar, gives a great overview of REST APIs and the knowledge necessary to build them in a structured manner. The blog begins by defining the REST acronym, who coined the term, and why one might use the REST style.

https://dev.to/drminnaar/rest-api-guide-14n2

The first part, covering the general fundamentals, provides a clear and concise picture of how one should structure a REST web service, and it specifically covers the six architectural constraints of REST. One of the more important constraints, I thought, was the Uniform Interface and its ‘principle of generality’, which describes avoiding a complex interface when simplicity would be far more advantageous across multiple clients.

Another constraint covered is the Layered System. In the blog it is described as more of a restraint on the client: client and server should be decoupled so that the server can work without the client assuming what the server is going to do. This lets the server pass a request through stages like security and other tiers without the client checking back or communicating with them to verify, so the process is not disrupted and security is not broken.

Part two presents an HTTP API to go further into the constraints described in part one and to show how to define contracts for resources. I felt this part went by a little too fast without explaining more about what a contract is or what your resources should be, but it did make some great points about naming conventions and how best to rank them, techniques I’ll want to apply later should I work further with a REST API. The other sections of part two cover status codes, which I had already learned about in class, and content negotiation. Content negotiation was very short and was really only described as paying attention primarily to JSON or XML and returning a 406 status code otherwise (a small sketch of that idea follows below).
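A minimal sketch of that content-negotiation idea, assuming an Express server; the route and data are made up for illustration.

import express from "express";

const app = express();

// Hypothetical route that only offers JSON; any other requested format gets 406 Not Acceptable.
app.get("/movies", (req, res) => {
  if (!req.accepts("json")) {
    res.sendStatus(406);
    return;
  }
  res.json([{ title: "Example Movie", rating: 5 }]);
});

app.listen(3000);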

The third part gives an example project based on his guide and on the Richardson Maturity Model, which part one describes as a leveled scale of how ‘mature’ (or, as I read it, how well designed) a service is, based on how many URIs it has and whether it implements multiple HTTP methods and status codes. The project uses the Onion architecture, which I found interesting and understood almost immediately just because of how an onion is structured. The “Ranker” project is mostly an application of the REST architecture that happens to also be a movie ranker: it lets you manage users, movies, and ratings, but the core of the project is to demonstrate REST, the Richardson Maturity Model, and their methodology.

 I felt like this particular blog post gave me some new concepts to think of when working on a REST API as well as some general formatting properties.

From the blog CS@Worcester – A Boolean Not An Or by Julion DeVincentis and used with permission of the author. All other rights reserved by the author.

Container Orchestration

Christian Shadis               

Docker has been the focus of my Software Construction, Design, & Architecture class for the past couple weeks. In a software development environment, it is paramount that applications can run on different operating systems and in different runtime environments. When applications rely on several dependencies, it becomes cumbersome to install all needed libraries on every machine the application runs on. This is where Docker comes in handy, allowing the programmer to create a self-sufficient container holding all parts of the application, along with all of its dependencies. Anybody can now run an application using its container without having to install any dependencies on their local computer.

Applications, however, are often designed as microservices, each microservice in its own container. A software company may have tens, hundreds, or thousands of different containers that need to be deployed and monitored at once. It is plain to see how this becomes a scaling issue. Container orchestration emerged to address the scalability of containerizing applications. Container orchestrators, like Kubernetes or Docker Swarm, allow the automation of repetitive work related to the deployment and maintenance of containers, such as configuration, scheduling, provisioning, deployment, resource allocation, scale of containers, load balancing, monitoring, and facilitating secure interactions between containers. This is why I chose to read about container orchestration from the article “What is Container Orchestration” from Red Hat.

The article goes into detail on what container orchestration is used for and why it is necessary, along with listing its major functions. It also describes how container orchestration programs like Kubernetes are configured, along with a basic overview of the anatomy and behavior of a Kubernetes cluster.
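To make that concrete, here is a minimal sketch of a Kubernetes Deployment manifest; the names, image, and replica count are invented for illustration. The orchestrator reads a declaration like this and then handles scheduling, restarting, and scaling the containers on its own.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3                      # run three copies of the container
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: example/api:1.0   # hypothetical container image
          ports:
            - containerPort: 3000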

Using Docker to containerize applications is a pivotal skill for developers to have. In a world where so much computing is moving toward cloud technology, however, it is also important to be able to use Docker Swarm or Kubernetes because a large portion of applications a developer will work on will be deployed on the cloud in some way. In those situations, traditional Docker knowledge will be of little use. Instead, the developer should be able to leverage a Kubernetes cluster or a Docker Swarm to work with large containerized cloud-based applications.

Before reading this article and writing this entry, I had no exposure to container orchestration, though I had wondered about the scalability of the small docker containers we have been working with in class. I learned the basics of the subject, and accrued a list of further references to read more about applying container orchestration in an enterprise setting.

https://www.redhat.com/en/topics/containers/what-is-container-orchestration

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Benefits of using REST API

Application Programming Interfaces (APIs) are sets of queries and commands that a user is allowed to use, and they are necessary when using a microservice architecture. APIs can be considered the messengers between microservices, since they transfer data between services. REST APIs are APIs that conform to the REST model and commonly use JavaScript Object Notation (JSON) to transfer data. JSON is a human-readable data format and is used because it is easy to understand. In the blog post “What is an API? A Digestible Definition with API Examples for Ecommerce Owners,” Matt Wyatt explains what an API is and why you would want to use one.

APIs use endpoints, which are URLs that invoke a certain method in the API. Four methods are standard practice: GET, PUT, POST, and DELETE. GET retrieves data from the API, PUT updates an existing entry, POST creates a new entry, and DELETE removes an entry. These are just the standard methods; developers can expose many other operations through additional endpoints.
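A rough sketch of how those four methods might map onto endpoints for a single resource, assuming an Express server with an in-memory list; every name here is invented for illustration.

import express from "express";

const app = express();
app.use(express.json());

const items: { id: number; name: string }[] = [];

// GET: retrieve all items
app.get("/items", (req, res) => {
  res.json(items);
});

// POST: create a new entry
app.post("/items", (req, res) => {
  items.push(req.body);
  res.status(201).json(req.body);
});

// PUT: update an existing entry
app.put("/items/:id", (req, res) => {
  const item = items.find((i) => i.id === Number(req.params.id));
  if (!item) {
    res.sendStatus(404);
    return;
  }
  item.name = req.body.name;
  res.json(item);
});

// DELETE: remove an entry
app.delete("/items/:id", (req, res) => {
  const index = items.findIndex((i) => i.id === Number(req.params.id));
  if (index === -1) {
    res.sendStatus(404);
    return;
  }
  items.splice(index, 1);
  res.sendStatus(204);
});

app.listen(3000);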

APIs are used in most applications available today. Whenever you do anything online, you are most likely using an API. APIs are so ubiquitous that it would be more difficult to find an application that does not use them. Using Facebook as an example, you would use a login API, a search API, a feed API, a friend request API, and possibly many more that are not obvious to the user.

Benefits of using such APIs include increased security, faster response time, and the ability to scale certain features when needed. APIs allow applications to restrict what data they allow in and out, which greatly increases security. This control is paramount in an application that contains sensitive information, such as personal details or banking information.

Personally, I believe APIs to be an excellent solution to a wide range of problems. They can easily allow access to data and restrict it at the same time. I see similarities between APIs and Java classes; good Java classes have accessor methods (getters/setters) that control how the class is used, their attributes are mostly private, and their implementation is kept hidden from the user. A good Java class can be used by someone who has no knowledge of how it works behind the scenes, and the same is true for APIs.

In conclusion, I think APIs are a useful tool that I most certainly will be using in the future. Regardless of my future job, frontend or backend, I will need to either create APIs or use them. They are so universal that avoiding them is almost impossible, although I wouldn’t want to avoid them. They offer a simple solution to a very complex problem and provide extra benefits along with it.

From the blog CS@Worcester – Ryan Blog by rtrembley and used with permission of the author. All other rights reserved by the author.

An Insight on Deep Class Design

Last week we took a look at API Design and discussed the importance of keeping an API as simple as possible. (Learn more about last week’s post here: https://georgechyoghlycs343.wordpress.com/2021/10/28/api-creation-why-simple-is-better/)

This week we will take a look at how this simplicity should be applied not only to APIs but to all programs regardless of purpose or language. As programs become more and more complex it only gets harder to properly keep track of every facet of said program. Having to remember every method and class separately can quickly become a grueling task that eats away at the time, productivity and morale of a software developer.

Within this video the speaker, professor John Ousterhout, gives his take on software design and what kind of mindset to take when designing your software. One of the major topics of this seminar is the idea of ‘Deep Classes’, which returns to the basic idea of abstraction. Ousterhout brings into focus the issues of ‘Shallow Classes’, which are small classes or methods that provide very little functionality. An example of this is shown below:

// A shallow method: its interface says everything its body does, so it hides almost nothing.
private void addNullValueForAttribute(String attribute) {
    data.put(attribute, null);
}

As Ousterhout states, this is a method that requires a full understanding of the functionality and does very little to hide information (https://youtu.be/bmSAYlu0NcY?t=918). This method essentially adds complexity with no benefit, which is a net loss in the world of software. With this example in mind, Ousterhout states the biggest mistake people make in software design: “too many, too small, too shallow classes”. He attributes this to what many developers have been told throughout their careers, which is to keep methods small. This is problematic because it can actually increase the complexity of a program, as every class and method adds only a small amount of functionality. 

This is especially true in things like the Java Class Library, which has many small classes and methods with little functionality. To read a file, Java requires you to create three separate objects to provide the file input, buffering, and object input. In contrast, Ousterhout brings up that UNIX wraps all of this into one abstraction, the UNIX file system, which takes care of other processes such as disk space management in the background.

So why does this matter in the end? The main point is that abstraction is incredibly important in modern software development. UNIX abstracted its file system, which lets developers spend little time worrying about file I/O implementation so that greater systems can be built. If something is used as often as file I/O, then it is worth creating an all-encompassing class or method for it. As long as classes are well organized, there is no reason they cannot be large and have a lot of functionality.
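As my own small sketch of that idea in TypeScript (not from the talk): a ‘deep’ class exposes a tiny interface, load and save, while hiding the file access, serialization, and encoding details behind it.

import { promises as fs } from "fs";

// A "deep" class in Ousterhout's sense: small interface, substantial hidden functionality.
class ObjectStore<T> {
  constructor(private path: string) {}

  async save(value: T): Promise<void> {
    await fs.writeFile(this.path, JSON.stringify(value), "utf8");
  }

  async load(): Promise<T> {
    return JSON.parse(await fs.readFile(this.path, "utf8")) as T;
  }
}

// Callers never deal with streams, buffers, or encodings:
// const store = new ObjectStore<{ name: string }>("user.json");
// await store.save({ name: "Ada" });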

From the blog CS@Worcester – George Chyoghly CS-343 by gchyoghly and used with permission of the author. All other rights reserved by the author.

Microservices Architecture In-Depth

Microservices architecture is a newer type of architecture that has become popular in recent times. It differs from common architectures like the monolith in that microservices are “an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms.” To better understand the microservices architecture, it is useful to compare it to the classic monolith architecture. In a monolith there is one single unit, one central server if you will, that does everything. One of the biggest problems with this design is that if anything needs to be done to the server, such as maintenance, or if something goes wrong and the server goes down, the entire system goes down with it because it is one big unit. Microservices fix this by having many small units/servers running almost independently of each other; while some may rely on others for information, if one part goes down for whatever reason it does not take the entire system down with it.

A good example of how a microservice architecture operates compared to a monolith: say we have two main servers in our system, one for the admins and another for the standard users. The admins have their own server and database, while the users have theirs. That way, if maintenance has to be done on the admin server, whether it is updating a database schema or anything else, it can be done without affecting the standard users, since their server and database are separate from the admins’. This is nice because it limits downtime for the users; maintenance is done on the users’ server only when absolutely necessary, limiting outages.
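A small, hypothetical sketch of that split as a Docker Compose file, with the admin and user services each backed by their own database; all service names and images are invented for illustration.

services:
  admin-api:
    image: example/admin-api:1.0   # hypothetical image
    ports:
      - "8081:3000"
    depends_on:
      - admin-db
  admin-db:
    image: postgres:14
  user-api:
    image: example/user-api:1.0    # hypothetical image
    ports:
      - "8080:3000"
    depends_on:
      - user-db
  user-db:
    image: postgres:14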

Microservices architecture also puts a big focus on failure detection and on trying to automatically restore a service if it goes down. These services are built in a way that allows early error and failure detection, so that fixes can be put out as fast as possible. Microservices are also built with upgradeability and scalability in mind, so that as a service grows and requires more processing power, it is easy to add more servers to provide the extra power needed. With a monolith, by contrast, this requires getting a whole new, stronger server rather than just adding another one; since the monolith is ‘all-in-one’, it does not allow the easy scaling that microservices allow.

Source

I chose the above source from martinfowler.com because we had some readings from that website earlier in the course, so since they had an article on microservices I thought, what better site to use than them? They also provide a lot of good information on the topic with clear and concise explanations.

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.