Category Archives: CS-343

Architecture

https://martinfowler.com/architecture/

This article on software architecture from Martin Fowler asks the questions “What is architecture?” and “Why does architecture matter?”. The answers unfold over the course of the article, which begins with why Fowler does not always like using the term architecture; he accepts it because he believes a good architecture supports a program’s evolution. His preferred definition is simple: “Architecture is about the important stuff. Whatever that is”. Fowler explains that the exact definition of software architecture has long been debated among computer scientists, and that the definition he puts forward in this article is the result of a conversation between himself and another professional, Ralph Johnson.

Although Fowler’s definition is his own and very simple, it makes sense, just like many other computer scientists’ “definitions” of software architecture. Clearly, architecture can be defined in many different ways depending on whom you ask, but Fowler also argues that architecture is important because bad architecture affects a system in several ways: it becomes harder to modify, updates introduce more bugs, and those updates and new features are released more slowly, which impacts the customer the system was designed for.

Fowler also discusses two different types of architecture: application and enterprise architecture. Application architecture is based on an “application,” which in Fowler’s words is a body of code perceived as a single unit by developers. Enterprise architecture is based on larger amounts of code that work independently of one another in different systems but are all used by one company, or “enterprise.”

I chose this article because it takes a thoughtful approach to the concept of software architecture and helps the reader open their understanding of the subject to different “definitions” of the concept, which can help when developing software architecture in the future. I feel this article would be helpful to anyone in the programming field, as it gives you multiple perceptions of architecture, what it is, and why it is important.

This article taught me the general idea of both application and enterprise architecture. I was able to see what Fowler and Johnson consider architecture to be and why, while other computer scientists may define or view it in a different light. I learned that architecture itself is complicated and cannot be pinned to one singular definition, as it fits many different definitions for many different people in computer science.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

REST API

Hello all who are reading this. In this post I’ll be discussing a blog that I found at the link just below this paragraph. This particular blog post, written by Douglas Minnaar, gives a great overview of REST APIs and the knowledge necessary to build them in a structured manner. The blog begins by defining the REST acronym, who coined the term, and why one might use the REST style.

https://dev.to/drminnaar/rest-api-guide-14n2

This first part, covering the general fundamentals, provided a clear and concise picture of how one should structure a REST web service; specifically, it covers the six architectural constraints of REST. One of the more important constraints, I thought, was the Uniform Interface and the ‘principle of generality’, which describes avoiding a complex interface when simplicity would be far more advantageous across multiple clients.

Another constraint covered was the Layered System. The blog describes it as more of a restraint on the client: client and server should be decoupled in a way that lets the server work without the client assuming what the server is going to do. The server can then pass a request through stages such as security and other tiers without the client checking back or communicating with it to verify, so as not to disrupt the process or possibly break security.

Part two builds an HTTP API to go further into the constraints described in part one and shows how to define the contracts for resources. I felt this part went by a little too fast without clearly describing what a contract is or what your resources should be, but it did make some great points on naming conventions and how best to rank resources, techniques I’ll want to apply later should I work further with a REST API. The other sections of part two cover status codes, which I had already learned about in class, and content negotiation. Content negotiation was covered very briefly, essentially: serve JSON or XML, and throw a 406 code otherwise.
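As a rough sketch of that idea (the class and method names below are my own, not from the blog), content negotiation can be reduced to inspecting the Accept header and signaling 406 when neither JSON nor XML can be served:

```java
// Minimal sketch of content negotiation: pick a representation the
// client accepts, or signal 406 Not Acceptable.
class ContentNegotiator {
    static final String NOT_ACCEPTABLE = "406";

    // Returns the chosen media type, or "406" if we support neither.
    static String negotiate(String acceptHeader) {
        if (acceptHeader == null || acceptHeader.contains("*/*")
                || acceptHeader.contains("application/json")) {
            return "application/json";   // preferred representation
        }
        if (acceptHeader.contains("application/xml")) {
            return "application/xml";
        }
        return NOT_ACCEPTABLE;           // nothing we can serve
    }

    public static void main(String[] args) {
        System.out.println(negotiate("application/json"));
        System.out.println(negotiate("text/csv"));
    }
}
```

A real framework would parse quality values and wildcards properly; this only shows the decision the blog describes.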

The third part gives an example project based on his guide and the Richardson Maturity Model, which part one describes as a leveled list of how ‘mature’ (or, as I read it, how well designed) a model is, based on how many URIs a service has and whether it implements multiple HTTP methods and status codes. The project uses the Onion architecture, which I found interesting and understood almost immediately just from how an onion is structured. The “Ranker” project is mostly an application of the REST architecture and, by circumstance, also a movie ranker. It lets you manage users, movies, and ratings, but the core of the project is to demonstrate REST and the Richardson Maturity Model and its methodology.

I felt this particular blog post gave me some new concepts to think about when working on a REST API, as well as some general formatting practices.

From the blog CS@Worcester – A Boolean Not An Or by Julion DeVincentis and used with permission of the author. All other rights reserved by the author.

Container Orchestration

Christian Shadis

Docker has been the focus of my Software Construction, Design, & Architecture class for the past couple weeks. In a software development environment, it is paramount that applications can run on different operating systems and in different runtime environments. When applications rely on several dependencies, it becomes cumbersome to install all needed libraries on every machine the application runs on. This is where Docker comes in handy, allowing the programmer to create a self-sufficient container holding all parts of the application, along with all of its dependencies. Anybody can now run an application using its container without having to install any dependencies on their local computer.

Applications, however, are often designed as microservices, with each microservice in its own container. A software company may have tens, hundreds, or thousands of different containers that need to be deployed and monitored at once; it is plain to see how this becomes a scaling issue. Container orchestration emerged to address the scalability of containerized applications. Container orchestrators like Kubernetes or Docker Swarm automate the repetitive work of deploying and maintaining containers, such as configuration, scheduling, provisioning, deployment, resource allocation, scaling, load balancing, monitoring, and facilitating secure interactions between containers. This is why I chose to read about container orchestration in the article “What is Container Orchestration” from Red Hat.
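As a sketch of what that automation looks like in practice (the names, image, and replica count here are my own placeholders, not from the article), a Kubernetes Deployment manifest simply declares a desired state and leaves scheduling, restarts, and scaling adjustments to the orchestrator:

```yaml
# Hypothetical Deployment: Kubernetes keeps three replicas of this
# container running, rescheduling them if a node or container fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: web-api
          image: example/web-api:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

The developer never starts containers by hand; the cluster continuously reconciles reality against this declared state.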

The article goes into detail on what container orchestration is used for and why it is necessary, along with listing its major functions. It also describes how container orchestration programs like Kubernetes are configured, along with a basic overview of the anatomy and behavior of a Kubernetes cluster.

Using Docker to containerize applications is a pivotal skill for developers to have. In a world where so much computing is moving toward cloud technology, however, it is also important to be able to use Docker Swarm or Kubernetes because a large portion of applications a developer will work on will be deployed on the cloud in some way. In those situations, traditional Docker knowledge will be of little use. Instead, the developer should be able to leverage a Kubernetes cluster or a Docker Swarm to work with large containerized cloud-based applications.

Before reading this article and writing this entry, I had no exposure to container orchestration, though I had wondered about the scalability of the small docker containers we have been working with in class. I learned the basics of the subject, and accrued a list of further references to read more about applying container orchestration in an enterprise setting.

https://www.redhat.com/en/topics/containers/what-is-container-orchestration

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Benefits of using REST API

Application programming interfaces (APIs) are a set of queries/commands that a user is allowed to use, and they are necessary when using a microservice architecture. APIs can be considered the messengers between microservices, since they transfer data between services. REST APIs are APIs that conform to the REST model, which uses JavaScript Object Notation (JSON) to transfer data. JSON is a human-readable data format and is used because it is easy to understand. In a blog post, “What is an API? A Digestible Definition with API Examples for Ecommerce Owners,” Matt Wyatt explains what an API is and why you would want to use one.

APIs use endpoints, which are URLs that execute a certain method in the API. There are four default methods that are standard practice: GET, PUT, POST, and DELETE. GET will retrieve data from the API, PUT will update an existing entry in the API, POST will create a new entry in the API, and DELETE will delete an entry in the API. These are just the standard methods; many others may be created by the developer of the API.
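As a toy illustration (the class and method names are my own invention, not from the blog post), the four default methods map naturally onto operations over a keyed collection of resources:

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory "API" mapping the four default HTTP methods onto
// operations over a keyed collection of resources.
class ResourceStore {
    private final Map<String, String> resources = new HashMap<>();

    // GET: retrieve a resource (null if it does not exist).
    String get(String id) {
        return resources.get(id);
    }

    // POST: create a new entry.
    void post(String id, String body) {
        resources.put(id, body);
    }

    // PUT: update an existing entry; false if there is nothing to update.
    boolean put(String id, String body) {
        if (!resources.containsKey(id)) return false;
        resources.put(id, body);
        return true;
    }

    // DELETE: remove an entry.
    void delete(String id) {
        resources.remove(id);
    }

    public static void main(String[] args) {
        ResourceStore api = new ResourceStore();
        api.post("42", "{\"name\": \"example\"}");
        System.out.println(api.get("42"));
        api.delete("42");
        System.out.println(api.get("42"));
    }
}
```

A real REST API would route URLs like `/items/42` to these operations and wrap results in status codes; the mapping of verbs to actions is the same.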

APIs are used in most applications available today. Whenever you do anything online, you are most likely using an API. APIs are so ubiquitous that it would be more difficult to find an application that does not use them. Using Facebook as an example, you would use a login API, a search API, a feed API, a friend request API, and possibly many more that are not obvious to the user.

Benefits of using such APIs include increased security, faster response time, and the ability to scale certain features when needed. APIs allow applications to restrict what data they allow in and out, which greatly increases security. This control is paramount when used in an application that contains sensitive information, such as personal details or banking information.

Personally, I believe APIs to be an excellent solution to a wide range of problems. They can easily allow access to data and restrict it at the same time. I see similarities between APIs and Java classes: good Java classes have accessor methods (getters/setters) that control how the class is used, their attributes are mostly private, and their implementation is kept a secret from the user. A good Java class can be used by someone who has no knowledge of how it works behind the scenes, and the same is true for APIs.
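That parallel can be made concrete with a small Java class of my own: callers interact only with the public accessors, invalid use is rejected, and the private state stays hidden, exactly as an API hides its backend:

```java
// A class that, like an API, exposes a narrow public surface and
// hides its internal representation from callers.
class Account {
    private double balance;   // hidden implementation detail

    Account(double opening) {
        this.balance = opening;
    }

    // Read access goes through a getter, never the field itself.
    double getBalance() {
        return balance;
    }

    // Controlled mutation: rejects invalid input instead of letting
    // callers corrupt the internal state directly.
    boolean deposit(double amount) {
        if (amount <= 0) return false;
        balance += amount;
        return true;
    }

    public static void main(String[] args) {
        Account a = new Account(100.0);
        a.deposit(50.0);
        System.out.println(a.getBalance());
    }
}
```

Swap "private field" for "internal database" and "getter" for "GET endpoint" and the analogy in the paragraph above holds one-for-one.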

In conclusion, I think APIs are a useful tool that I most certainly will be using in the future. Regardless of whether my future job is frontend or backend, I will need to either create APIs or use them. They are so universal that avoiding them is almost impossible, although I wouldn’t want to avoid them. They offer a simple solution to a very complex problem and provide extra benefits along with it.

From the blog CS@Worcester – Ryan Blog by rtrembley and used with permission of the author. All other rights reserved by the author.

An Insight on Deep Class Design

Last week we took a look at API Design and discussed the importance of keeping an API as simple as possible. (Learn more about last week’s post here: https://georgechyoghlycs343.wordpress.com/2021/10/28/api-creation-why-simple-is-better/)

This week we will take a look at how this simplicity should be applied not only to APIs but to all programs, regardless of purpose or language. As programs become more and more complex, it only gets harder to properly keep track of every facet of the program. Having to remember every method and class separately can quickly become a grueling task that eats away at a software developer’s time, productivity, and morale.

Within this video the speaker, Professor John Ousterhout, gives his take on software design and the mindset to adopt when designing your software. One of the major topics of this seminar is the idea of ‘deep classes’, which returns to the basic idea of abstraction. Ousterhout brings into focus the issue of ‘shallow classes’: small classes or methods that provide very little functionality. An example is shown below:

private void addNullValueForAttribute(String attribute) {
    data.put(attribute, null);
}

As Ousterhout states, this is a method that requires a full understanding of its functionality and does very little to hide information (https://youtu.be/bmSAYlu0NcY?t=918). The method essentially adds complexity with no benefit, which is a net loss in the world of software. With this example in mind, Ousterhout names the biggest mistake people make in software design: “too many, too small, too shallow classes”. He attributes this to what many are told throughout their careers, namely to keep methods small. This is problematic because it can actually increase the complexity of a program, as every class and method adds only a small amount of functionality.

This is especially true in things like the Java class library, which has many small classes and methods with little functionality. To read a serialized object from a file, Java requires you to create three separate classes to provide the file input, buffering, and object input. In contrast, Ousterhout brings up that UNIX wraps all of this into one deep abstraction, the UNIX file system, which takes care of other processes such as disk space management in the background.
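In code, the Java side of that comparison is the familiar stream-wrapping idiom (my own demonstration, using the standard library classes the talk refers to): three objects composed just to read one serialized value.

```java
import java.io.*;

// The three-class composition Ousterhout criticizes: FileInputStream
// for raw bytes, BufferedInputStream for buffering, ObjectInputStream
// for deserialization.
class DeepVsShallow {

    static Object readObject(String path) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(path)))) {
            return in.readObject();
        }
    }

    // Serialize a value to a temp file and read it back with the idiom
    // above; wraps checked exceptions for convenience.
    static Object roundTrip(Object value) {
        try {
            File tmp = File.createTempFile("demo", ".bin");
            tmp.deleteOnExit();
            try (ObjectOutputStream out = new ObjectOutputStream(
                    new BufferedOutputStream(new FileOutputStream(tmp)))) {
                out.writeObject(value);
            }
            return readObject(tmp.getPath());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello"));
    }
}
```

A "deep" design would hide the buffering and byte handling behind a single call, the way the UNIX file system hides disk management.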

So why does this matter in the end? The main point is that abstraction is enormously important in modern software development. UNIX abstracted its file system, which allows developers to spend little time worrying about file I/O implementation and frees them to build greater systems. If something is used as often as file I/O, then it is worth creating an all-encompassing class or method for it. As long as classes are well organized, there is no reason they cannot be large and have a lot of functionality.

From the blog CS@Worcester – George Chyoghly CS-343 by gchyoghly and used with permission of the author. All other rights reserved by the author.

Microservices Architecture In-Depth

Microservices architecture is a newer type of architecture that has become popular in recent times. It differs from common architectures like the monolith in that microservices are “an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms.” To better understand the microservices architecture, it is useful to compare it to the classic monolith architecture. In a monolith, there is one single unit, one central server, if you will, that does everything. One of the biggest problems with this design is that if anything needs to be done to the server, such as maintenance, or if something goes wrong and the server goes down, the entire system is taken down with it, since it is one big unit. Microservices fix this by having many small units/servers running almost independently of each other; while some may rely on others for information, if one part goes down for whatever reason, it does not take the entire system down with it.

A good example of how microservices operate compared to a monolith: say we have two main servers in our system, one for the admins and another for the standard users. The admins have their own server and database, while the users have theirs. If maintenance has to be done on the admin server, whether it is updating a database schema or anything else, it can be done without affecting the standard users, since their server and database are separate from the admins’. This limits downtime for users, because maintenance on the users’ server needs to happen only when absolutely necessary.

Microservices architecture also puts a big focus on failure detection and on automatically restoring a service if it goes down. These services are built in a way that allows early error/failure detection, so that fixes can be deployed as fast as possible. Microservices are also built with upgradeability and scalability in mind, so that as a service grows and requires more processing power, it is easy to add more servers to supply it. With a monolith, by contrast, this requires a whole new, stronger server rather than just adding another one; since the monolith is ‘all-in-one’, it does not allow the easy upgrading that microservices allow.

Source

I chose the above source from martinfowler.com because we had some readings from that website earlier in the course, so since they had an article on microservices, I thought: what better site to use? They also provided a lot of good information on the topic, with clear and concise explanations.

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.

What is Docker?

Summary:

This article goes over what Docker is and how it has become so mainstream today. It generally covers what containers are, the components of Docker, Docker’s advantages, and even its drawbacks. Docker became popular because it makes it easy to move an application’s code, along with all of its dependencies, from the developer’s laptop to a server. After reading this article, we should have a general understanding of what exists within Docker and how it works.

Reason:

The reason I chose this article was that we use Docker very often in class, and most of us had never heard of it coming into computer science. Why was it suddenly something so instrumental in most of our assignments? The more we used Docker, the more apparent its versatility and the power of its capabilities became.

What I learned:

Docker is a software platform for building applications based on containers: small, lightweight execution environments that make shared use of the operating system kernel but otherwise run in isolation from one another. Containers are self-contained units of software you can deliver from one server to another, from your laptop to EC2 to a bare-metal giant server, and they will run the same way because they are isolated at the process level and have their own file system.

A Dockerfile is a text file that provides a set of instructions for building an image. A Docker image is a portable, read-only executable file containing the instructions for creating a container. The docker run utility is the command that launches a container. Docker Hub is a repository where images can be stored, shared, and managed. Docker Engine is the server technology that creates and runs the containers. Docker Compose is a command-line tool that uses YAML files to define and run multi-container Docker applications; it allows you to create, start, stop, and rebuild all the services from your configuration and view the status and log output of all running services.

The advantages of Docker containers are that they are minimalistic and enable portability, they enable composability, and they help ease orchestration and scaling. The disadvantages are that containers are not virtual machines, they don’t provide bare-metal speed, and they are stateless and immutable. Today container usage continues to grow as cloud-native development techniques become the mainstream model for building and running software, but Docker is now only a part of that puzzle.
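To make those pieces concrete, here is a minimal Dockerfile of my own (the base image, jar name, and port are placeholder assumptions, not from the article): a short list of instructions from which Docker builds a portable image.

```dockerfile
# Hypothetical Dockerfile: packages a small Java app with its runtime.
# Base image providing a Java 17 runtime.
FROM eclipse-temurin:17-jre
WORKDIR /app
# Copy the built application into the image.
COPY app.jar /app/app.jar
# Port the containerized app listens on.
EXPOSE 8080
# Command run when the container starts.
CMD ["java", "-jar", "app.jar"]
```

Building and running it would look like `docker build -t myapp .` followed by `docker run -p 8080:8080 myapp`, with no Java installation needed on the host beyond Docker itself.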

Source: https://www.infoworld.com/article/3204171/what-is-docker-the-spark-for-the-container-revolution.html

From the blog CS@Worcester – Life as a CS Student by Dylan Nguyen and used with permission of the author. All other rights reserved by the author.

API

API stands for application programming interface, which is a set of definitions and protocols for building and integrating application software.

How do APIs work?

An application programming interface (API) is a software interface that allows two applications to communicate with one another without the need for a human to intervene. In other words, an API is a set of software capabilities and operations, exposed as code that can be accessed or executed, which lets two different software programs communicate and exchange data with one another.

It allows products or services to talk with one another without requiring knowledge of how they are implemented.

Consider the following example to better understand how the API works:

Let’s look at how an API works with a basic example from everyday life. Assume you’ve gone to a restaurant for lunch or dinner. The server approaches you and hands you a menu card, and you can customize your order, for example by specifying that you want a veggie sandwich without onion.

After some time has passed, the waiter brings your order. However, it is not as straightforward as it appears, since there is a process that occurs in the middle.

Because you will not go to the kitchen to pick up your order or tell the cooking crew what you want, the waiter plays a vital role in this situation.

An API does the same thing: like the waiter, it takes your request, tells the system what you want, and brings a response back to you.

Why would we need an API?

Here are a few reasons to use API:

  • An API allows two separate software applications to communicate and exchange data.
  • It makes it easier to incorporate content from any website or application.
  • App components can be accessed using APIs, so services and information can be delivered in a more flexible manner.
  • The generated content can be automatically published.
  • It enables a consumer or a business to personalize the content and services they use the most.
  • APIs assist in anticipating changes in software that must be made over time.

To sum up, the major reason APIs are so important in today’s marketplaces is that they enable speedier innovation. Barriers to change are removed, and more people can contribute to an organization’s success. They offer two advantages: they let a company create better products while also distinguishing it from the competition.

From the blog CS@Worcester – Site Title by proctech21 and used with permission of the author. All other rights reserved by the author.

JSON

I’ve been seeing so many JSON files while working with Docker and can’t help but wonder: what is JSON? What do these files do, and why do we need them along with JavaScript? In this blog, I want to cover this topic to help myself and others learn more about JSON.

JSON stands for JavaScript Object Notation and is a way to store information in an organized, easy-to-access manner. Basically, JSON gives a human-readable collection of data that can be accessed in a logical manner. There are many ways to structure JSON data, with arrays and nested objects being the most popular, but I will not go into the details of those two methods, focusing instead on what JSON is.

Why does JSON matter?

JSON has become more and more important for sites to be able to load data quickly and seamlessly, in the background, without delaying page rendering. It also helps with swapping the contents of an element within our layouts without refreshing the page, which is convenient not only for users but also for developers in general. Because of this, many big sites rely on content provided by sites such as Twitter, Flickr, and others. These sites provide RSS feeds to minimize the effort of importing and using data on the server side, but with AJAX-powered sites we run into a problem: we can only load an RSS feed if we’re requesting it from the same domain it’s hosted on. JSON lets us overcome this cross-domain issue, because a callback function (the JSONP technique) sends the JSON data back to our domain. This capability makes JSON so useful, as it solves many problems that were otherwise difficult to work around.

JSON structure

JSON is a string whose format very much resembles the JavaScript object literal format. We can include the same basic data types inside JSON as in a standard JavaScript object: strings, numbers, arrays, booleans, and other object literals. This allows us to construct a data hierarchy. JSON is also purely a string with a specified data format: it contains only properties, not methods.
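A small example of my own illustrates that hierarchy: every value is one of the basic types, objects and arrays nest freely, and there are no methods anywhere.

```json
{
  "name": "Ada Lovelace",
  "age": 36,
  "active": true,
  "languages": ["JavaScript", "Python"],
  "address": {
    "city": "Worcester",
    "state": "MA"
  }
}
```

The whole thing is just a string until a parser (such as JavaScript’s `JSON.parse`) turns it back into live objects.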

REST vs SOAP: The JSON connection

Originally, this kind of data was transferred in XML format using a protocol called SOAP, but XML was bulky and difficult to manage in JavaScript. JavaScript already has objects, a way to express data within the language, so Douglas Crockford (the creator of JSON, as well as JSLint and JSMin) took a subset of that object literal notation as the specification for a new data interchange format and named it JSON.

REST then began to overtake SOAP for transferring data. The biggest advantage of programming with REST APIs is that multiple data formats can be used: not only XML but also HTML and JSON. Since developers prefer JSON over XML, they have come to favor REST over SOAP.

Today, JSON is the standard for exchanging data between web and mobile clients and back-end services. As I go deeper into the software development cycle, I feel the need for JSON is essential. Advantages aside, there are disadvantages too, but the importance of JSON is undebatable.

Source: https://www.infoworld.com/article/3222851/what-is-json-a-better-format-for-data-exchange.html

From the blog CS@Worcester – Nin by hpnguyen27 and used with permission of the author. All other rights reserved by the author.

REST APIs

I chose to write about REST APIs this week because we are covering them in class, and because REST APIs are ubiquitous in software development these days. An application programming interface, or API, is a way of sending data between services over the internet. APIs have become popular because they allow companies to hand off already solved problems, like email automation, username and password validation, and payment processing, to other established services. This lets developers focus on the unique business demands of their customers and products.

In web development, the client is the service or person making requests to access the data within the application; most often, the browser is the client. A resource is a piece of information that the API can provide for the client, and a resource needs to have a unique identifier. A server receives and fulfills client requests. REST stands for Representational State Transfer: when the client requests a resource from the API, it comes in the resource’s current state, in a standardized representation.

At my job we have one workflow problem that could be solved with an API. We have a client-side web application where customers place orders, and an internal desktop application, called the primer log, where we keep track of primers needed for those orders. The client application is a web application, but the internal application is not, and this causes our company all kinds of problems. My coworkers will often say, “I wish these services could talk to each other,” and that is exactly what an API does. Our internal application keeps track of which primers we have, where they are, and which are dry, but the customers cannot see that, so they frequently request that we use primers we do not have onsite.

We will eventually recreate our primer log as a web application, which would allow our client web application and our primer log to communicate. When a customer requests that we use a primer, that would be a client request for a resource from the API. The API would check our internal primer log and send a response back to the client indicating whether or not the requested primer is onsite. That response could prompt the client to order the required primer if we did not have it. That is ROI, not to mention the time it would save my team and the improvement to turnaround time for our customers. This is just one example of how an API can solve a real-life business problem.
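A sketch of the core of that lookup (entirely hypothetical; the real primer log’s fields and endpoints would differ) could be as simple as a keyed availability check sitting behind a GET endpoint:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical core of a primer-log API: the client asks whether a
// primer is onsite, and the server answers from its inventory.
class PrimerLog {
    private final Map<String, Boolean> onsite = new HashMap<>();

    // Record whether a primer is currently onsite.
    void record(String primerId, boolean isOnsite) {
        onsite.put(primerId, isOnsite);
    }

    // What a GET /primers/{id}/availability handler would return.
    String availability(String primerId) {
        Boolean here = onsite.get(primerId);
        if (here == null) return "unknown primer";
        return here ? "onsite" : "order required";
    }

    public static void main(String[] args) {
        PrimerLog log = new PrimerLog();
        log.record("P-101", true);
        log.record("P-202", false);
        System.out.println(log.availability("P-101"));
        System.out.println(log.availability("P-202"));
    }
}
```

The "order required" response is exactly the signal the client application could use to prompt the customer to order the primer up front.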

From the blog CS@Worcester – Jim Spisto by jspisto and used with permission of the author. All other rights reserved by the author.