Category Archives: CS-343

week-9

Hello, week-9. I want to post a blog quickly reviewing the API topic to learn more about REST calls. I was confused about it at first, so I researched it; what follows is based on the article Understanding And Using REST APIs.

 

What is a REST API?

 

API (Application Programming Interface) – A set of rules that allows programs to talk to each other. The developer creates the API on the server and enables the client to talk to it.

REST (Representational State Transfer) determines how the API looks. It is a set of rules that developers follow when they create their API. One of the rules states that you should be able to get a piece of data (called a resource) when you link to a specific URL. Each URL you request is called a request, while the data sent back to you is called a response.

The Anatomy Of A Request

It’s important to know that a request is made up of four parts:

  • The endpoint
  • The method
  • The headers
  • The data (or body)

 

The endpoint – the URL you request, structured as root-endpoint/path, with optional query parameters after a “?”. The root-endpoint is the starting point of the API you are requesting from.

The path determines the resource you’re requesting. Think of it like an automatic answering machine that asks you to press 1 for one service, press 2 for another service, 3 for yet another service, and so on.
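
For example, a made-up request URL breaks down like this (the domain and path are hypothetical, not from the article):

```
https://api.example.com/users/alice/repos
```

Here, `https://api.example.com` is the root-endpoint and `/users/alice/repos` is the path: the equivalent of pressing 1, then 3, on the answering machine.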

The Method

The method is the type of request sent to the server:

  • GET – Request to get a resource from a server. If you perform a `GET` request, the server looks for the requested data and sends it back to you.
  • POST – Request to create a new resource on a server. If you perform a `POST` request, the server creates a new entry in the database and tells you whether the creation was successful.
  • PUT & PATCH – Requests to update a resource on a server. If you perform a `PUT` or `PATCH` request, the server updates an entry in the database and tells you whether the update was successful.
  • DELETE – Request to delete a resource from a server. If you perform a `DELETE` request, the server deletes an entry in the database and tells you whether the deletion was successful.

These methods provide meaning to the request you make. They are used to perform four possible operations: Create, Read, Update and Delete (CRUD).

The Headers – used to provide information to both the client and the server. They have many purposes, such as authentication and giving information about the body content. You can find a list of valid headers on MDN’s HTTP Headers Reference.

The Data (or body) – contains the information you want to send to the server. It is only used with POST, PUT, PATCH, or DELETE requests.
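
Putting the four parts together, a request can be sketched with the Fetch API in TypeScript (the endpoint and payload below are hypothetical examples, not from the article):

```typescript
// A hypothetical POST request showing all four parts of a request
const response = await fetch("https://api.example.com/users", { // the endpoint
  method: "POST",                                   // the method
  headers: { "Content-Type": "application/json" },  // the headers
  body: JSON.stringify({ name: "Andrew" }),         // the data (body)
});
console.log(await response.json()); // the response sent back by the server
```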

From the blog Andrew Lam’s little blog by Andrew Lam and used with permission of the author. All other rights reserved by the author.

Docker

This week I researched more about Docker to learn what exactly it is and what its applications are. I had never heard of or used Docker before I joined this class, and since we use it for almost every class, I was curious to learn more about it. From what I have learned so far, Docker looks like a very useful and versatile piece of software in the field of software development.

Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. Docker can also be described as an open-source platform for building, deploying, and managing containerized applications. Because Docker is open source, it enables developers to package their applications into containers. These containers simplify the delivery of distributed applications and have become popular as organizations shift to cloud-native development and hybrid multi-cloud environments.

A Dockerfile is a simple text file that every Docker container starts with; it contains instructions for how to build a Docker container image, and it automates the process of image creation. A Docker image contains the executable application source code as well as all the libraries and tools the application code needs to run properly. Docker images are made up of layers, and each layer corresponds to a version of the image. Docker containers are the live, running instances of Docker images, and users can interact with them and adjust their settings according to their preferences. Docker Hub is a public repository where Docker images are stored and can be used by all Docker Hub users.

The advantages of using Docker include cost-effectiveness, fast deployment, the ability to run anywhere, and flexibility. The disadvantages are that it advances quickly but lacks documentation, which makes some developers hunt for information and wastes their time, and that some developers find switching to Docker a quite steep learning curve, making it hard to understand.
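
As a rough sketch of the pieces described above (the base image, file contents, and image name here are hypothetical examples):

```Dockerfile
# Dockerfile: a simple text file of instructions for building an image
FROM node:18-alpine         # base image; each instruction adds a layer
WORKDIR /app
COPY . .                    # copy the application source code into the image
RUN npm install             # install the libraries the code needs to run
CMD ["node", "server.js"]   # what a live, running container executes
```

Building the image with `docker build -t my-app .` and starting a container with `docker run my-app` mirrors the image-to-container relationship described above; pushing the image to Docker Hub would make it available to other Docker Hub users.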

I chose this article because it explains Docker and its tools in the simplest way, which helped me understand more about Docker since this is the first time I have ever used this application. It explains the advantages of using Docker and provides clear information on different aspects of it, such as the Dockerfile, Docker images, and Docker containers. Since it is one of the most used applications among software developers, it will be important for me too in the future as a software developer.

Article: https://www.ibm.com/cloud/learn/docker

From the blog CS@Worcester – Mausam Mishra's Blog by mousammishra21 and used with permission of the author. All other rights reserved by the author.

What is GRASP?

GRASP is short for General Responsibility Assignment Software Patterns. GRASP is a set of design patterns in object-oriented software development. It’s a tool for software developers that provides a way to solve organizational problems, and it offers a common vocabulary for talking about abstract concepts. These patterns assign responsibilities to objects and classes in object-oriented program design.

     In GRASP (General Responsibility Assignment Software Patterns), when working with object-oriented programming, problems and their solutions are classified together into patterns. This makes them well defined, so they can be applied in other, similar instances. GRASP has nine different patterns for classes and objects that help make responsibilities clear. The nine patterns are:

– Controller: Assigns the responsibility of dealing with system events.
– Creator: Decides which class is responsible for creating objects; one of the most common responsibilities in an object-oriented system.
– High Cohesion: Evaluative pattern that attempts to keep objects focused, manageable and understandable.
– Indirection: Pattern that supports low coupling and reuses potential between two elements.
– Information Expert: The most basic principle – assign responsibility to the class that has the information needed to fulfill it; if we do not have the data we need, we cannot meet the requirement.
– Low Coupling: A measure of how strong one element is connected to, has knowledge of, or relies on another element.
– Polymorphism: Responsible for defining the variation of behaviors based on the type it is assigned to.
– Protected Variations: A pattern that protects elements from the variations on other elements by wrapping the focus with an interface and using polymorphism to create various implementations.
– Pure Fabrication: A class that does not represent a concept in the problem domain.
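
As a small sketch of how two of these patterns play out in code (the Order and LineItem classes below are made up for illustration, not from the blog):

```typescript
// Hypothetical example of the Creator and Information Expert patterns
class LineItem {
  constructor(private price: number, private quantity: number) {}

  // Information Expert: LineItem holds the price and quantity,
  // so it is responsible for computing its own subtotal.
  subtotal(): number {
    return this.price * this.quantity;
  }
}

class Order {
  private items: LineItem[] = [];

  // Creator: Order aggregates LineItems, so it is the class
  // responsible for creating them.
  addItem(price: number, quantity: number): void {
    this.items.push(new LineItem(price, quantity));
  }

  // Information Expert again: Order knows all of its items,
  // so it is responsible for computing the total.
  total(): number {
    return this.items.reduce((sum, item) => sum + item.subtotal(), 0);
  }
}
```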

     I chose to write about GRASP (General Responsibility Assignment Software Patterns) because it is part of our curriculum, which we will be learning in this class. Since I have written about DRY (Don’t Repeat Yourself) and YAGNI (You Ain’t Gonna Need It), it made sense to continue researching patterns and learning more about them. As I have mentioned many times before, I plan on becoming a full-stack developer. Learning about these different types of patterns and what each of them does will help me become a more knowledgeable and more efficient developer when it comes to coding.

A good blog I found when researching this topic, which I suggest reading to learn more about GRASP, is:
http://www.kamilgrzybek.com/design/grasp-explained/.
This blog is very useful and has many examples of the nine patterns I mentioned.

From the blog CS@Worcester – Michael's Developer Blog by michaelchaau and used with permission of the author. All other rights reserved by the author.

Software Architecture

I was curious to learn more about software architecture, and based on the article “Software Intelligence for Digital Leaders,” I came to understand what it is. Much of what we do on a daily basis, from using a cell phone to clocking in at work to sending an email, depends on the software architecture of the systems we use. Without software architecture, so much of what we know and use would not be possible. But what is it?

Software architecture is what makes innovation possible within an organization. It is simply the organization of a system, and this organization includes all components, how they interact with each other, the environment in which they operate, and the principles used to design the software.

Software architecture is a blueprint for both the system and the project. It defines the work assignments that must be carried out by design and implementation teams. The architecture is the primary carrier of system qualities such as scalability, performance, modifiability, security, and cost reduction, none of which can be achieved without a unifying architectural vision.

Some benefits of Software Architecture

Software architecture is extremely important for a software project. Let’s see some benefits of software architecture that will tell us more about how it can help us in our project and why we should invest in good software architecture.

  • Identifies areas for potential cost savings. An architecture helps an organization analyze its current IT and identify areas where changes could lead to cost savings.
  • Creates a solid foundation for the software project.
  • Makes your platform scalable.
  • Increases the performance of the platform.
  • Reduces costs and avoids code duplication.
  • Implements a vision. Looking at the architecture is an effective way to view the overall state of IT and to develop a vision of where the organization needs or wants to go with its IT structure.
  • Better code maintainability. It is easier to maintain existing software, as the structure of the code is visible and known, so it’s easier to find bugs and anomalies.
  • Enables quicker changes in IT systems. There is increased demand for systems to change quickly to meet rapidly evolving business needs, legislative requirements, etc.
  • Increases the quality of the platform.
  • Helps manage complexity.
  • Makes the platform faster.
  • Higher adaptability. New technical features, such as different front ends or adding a business rule engine, are easier to achieve, as your software architecture creates a clear separation of concerns.
  • Helps in risk management. It helps to reduce risks and chances of failure.
  • Reduces time to market and development time.
  • Prioritizes conflicting goals. It facilitates communication with stakeholders, contributing to a system that better fulfills their needs.

I talked about software architecture because, as a computer science major interested in software, learning about its importance is vital and necessary. Software architecture is important for creating and maintaining projects, which is a huge responsibility.

To sum up, software architecture dictates technical standards, including software coding standards, tools, and platforms. It gives you the right technical solutions to ensure your success.

What Is Software Architecture – Examples, Tools, & Design | CAST (castsoftware.com)

15 benefits of software architecture you should know (apiumhub.com)

From the blog CS@Worcester – Gracia's Blog (Computer Science Major) by gkitenge and used with permission of the author. All other rights reserved by the author.

REST API Security

Christian Shadis

For the past couple of weeks, my Software Construction, Architecture, and Design course has been focusing on the anatomy of simple REST APIs. While we were learning how to create endpoints to retrieve data from the backend, it was as simple as extracting the data from the database. This seemed too basic to me. I was sure there was still plenty of unexplored detail in the backend regarding security protocols, and I was curious to see how security measures can be implemented in a REST API like the ones we have been working with.

Chris Wood discussed Security Scheme Objects and their role in the REST API, listing supported security mechanisms (Basic Authentication, API Key, JWT Bearer, and OAuth2.0), in his 2019 article REST API Security Design. He began by defining what a REST API Security Scheme Object is: a Component object (like Schemas or Responses) which “[describes] the security requirements for a given operation.” There is not a specific object for each of the mechanisms listed above, but rather a single Security Scheme Object that can represent any of the four mechanisms. The Object is defined in the top-level index.yaml file under the Components section, the desired mechanism is applied, and any additional arguments specific to the mechanism are passed. Once it is defined, the Security Scheme Object can be applied to individual endpoints or operations. For example, for some path /users/, we define an operation get, and underneath the parameters and responses section, a security section can be added containing an array of Security Scheme Objects to be applied. If we define a BasicAuth object and assign it to the get /users/ endpoint, other developers know that the operation should have basic authentication.
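
A minimal sketch of what that definition might look like in OpenAPI YAML, using the BasicAuth and get /users/ names from the article’s example (everything beyond those names is assumed for illustration):

```yaml
# index.yaml (top level)
components:
  securitySchemes:
    BasicAuth:            # the Security Scheme Object
      type: http
      scheme: basic

paths:
  /users/:
    get:
      summary: Retrieve users
      security:
        - BasicAuth: []   # apply the scheme to this operation
      responses:
        '200':
          description: A list of users
```

Other developers reading the specification now know that this operation is expected to use basic authentication.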

My main takeaway from the article was that my perception of API security was flawed. Whereas I had considered the idea of implementing security measures directly into the API itself, the article outlines instead that security measures in an API consist primarily as a guide or a definition of security requirements for other developers to uphold. For example, authentication itself is not performed inside our REST API by implementing one of these Security Scheme Objects. Rather, the API designer can specify that some specified authentication should be included in certain operations by defining them in the API.

While security measures in API design may not be as essential as I believed, the article asserts that it is a vital factor of API design. As I continue my career as a developer, I plan to develop all my applications in the most secure way possible. Since API design is such a fundamental aspect of web application development, I am glad to have gained some exposure to how security measures are implemented.

Reference: https://blog.stoplight.io/rest-api-security

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Query String

Let me just begin by saying I really like how this blog post was organized and written. It starts off very formally by giving definitions for the terminology used throughout the post, and then transitions to explaining those same terms in layman’s terms. With the way the post is written, even a person without a computer science background can mostly follow what is going on, because the author explains the concepts using things that everyone sees and uses on a day-to-day basis.

The post starts with an example of a web URL and how the statement after the web extension is actually a query string. Then the article goes on to talk about how you can format your search to look for a specific pattern, or a maximum or minimum string length. After going over the basics of how to format query strings, it talks about how you can combine multiple criteria to further narrow down your search.

For example:

Somedictionary.com/words/?letter=5&partOfSpeech=noun

This finds you all of the five-letter nouns in the dictionary.
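
As a sketch, the same kind of query string can be built and parsed with the standard URL APIs in TypeScript (the domain is the post’s made-up example):

```typescript
// Build a query string from multiple criteria
const params = new URLSearchParams({ letter: "5", partOfSpeech: "noun" });
const url = `https://somedictionary.com/words/?${params.toString()}`;
// -> https://somedictionary.com/words/?letter=5&partOfSpeech=noun

// Parse a query string back into its individual criteria
const parsed = new URL(url);
console.log(parsed.searchParams.get("partOfSpeech")); // "noun"
```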

The reason why I chose this particular blog post to read and talk about this week is that it is relevant to what we are learning in class. It is a topic we are going to go over in class, and we have already talked a little bit about it. In addition, we are also using it this week for the homework. I chose this post because I think it does a great job explaining the topic. The post is short and worded in a way that is easy to understand. It also uses a lot of images and examples, so it was easy to follow along with what the author was saying about the topic.

In class, when we started talking about this topic, I did not think it was particularly difficult to understand, but another reason I chose this blog post is that I have always been a “use it or lose it” kind of person: whenever I learn something new, I need to apply that information or I forget about it.

In class, I immediately made the connection that you can use query strings to refine searches in databases, but after reading this blog post, I learned you can also apply them to websites. In a way, I think I have always known this, because oftentimes when I am navigating a website and want to go from one page to the next, I just change the page number or page entry in the URL. I don’t think I would have put two and two together and realized on my own that what I was doing was modifying the query string, or that I was switching from one endpoint to another.

From the blog CS@Worcester – Just a Guy Passing By by Eric Nguyen and used with permission of the author. All other rights reserved by the author.

Notes on “Notes on the Errors of TeX”

Having just read a short keynote address* by Donald Knuth from 1989 about the development of his typesetting system, TeX, I’m struck by how little the core methodology of computer programming has changed in the last thirty years.

Students and new programmers can struggle with the difference in scale between real-world applications and textbook examples. Knuth feels that his typesetting system, TeX, is a useful illustration to beginners because it is a “medium-size” project: small enough for one person to understand given a reasonable amount of effort, but large enough to draw meaningful architecture and design lessons from.

Here, he outlines a number of lessons he took from the process of writing TeX. There are nine, and they are only discussed briefly here (this is a summary of a much larger paper he wrote, simply called “The Errors of TeX”). I think they’re generally all very good, but for the sake of brevity I’m only going to focus on one of them.

Knuth’s first lesson here, which I think is the most important, is that it is not enough to merely specify a design. The implementation isn’t just physically necessary to create the product, but the actual process of implementing it is where the most important information about the design comes from. This seems maybe so obvious that it’s not worth actually saying, but I think it’s important and easy to lose sight of. Looking back on it, I think that personally, my greatest stumbling block in programming is what I would call overdesign. I would sit down and construct a conceptually perfect model in my head, and then it would fall apart on me as I attempted to implement it without me really understanding why.

At the time, anyway. Looking back now, I have a pretty clear idea, and Knuth expresses a similar idea here. The problem is that I viewed implementation as an externality to the design process rather than as one of the most significant components of it. Fundamentally, the purpose of thinking about software architecture is to minimize difficulty in implementation, and so actually writing the code is the most important source of data in the process. For example, it’s one thing to know, vaguely, that Microsoft’s DirectSound library can provide audio playback functionality to an application. But knowing specifically that it needs a ring buffer and understanding all the small, tedious details in setting it up might push you in the direction of architecting your program a specific way, or using a different audio library if you’d rather not. Granted, I guess it’s possible that you might come to the same conclusions simply looking at the documentation, but my mind doesn’t work that way, and I suspect that experience is generally a better teacher.

*Knuth, Donald E. (1989). “Notes on the Errors of TeX” (PDF).

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Technical Debt/Cruft

Hello everyone,

https://dev.to/6figuredev/episode-207-tech-debt-vs-cruft

This podcast in particular talks about technical debt in detail, as well as how to handle it. It introduced me to a term I had never heard before: cruft. Cruft is not traditional technical debt, but specifically a type of debt where the developer who wrote the code did not make the effort to make the code clean or work well. I had never made the distinction in my own mind to consider that a lack of best practices, or simply writing the code wrong, could count as that type of technical debt.

Now, I don’t mean the developer was malicious or lazy; while that is a possibility, it doesn’t seem to be a scenario that happens often enough to require a term like that. I also like having the extra terminology, because technical debt by definition can refer to an intentional design choice that creates debt, one usually made in consensus, rather than one developer not knowing how to code something or writing poorly designed code.

Cruft also gives a business a more specific term to focus on, as it carries the greater debt cost for the company when the product needs to be reworked. If the code is rushed and written poorly, then when errors arise the pressure on the development team is greater, and that will lead to failures or delays.

With cruft as the core of that perspective on technical debt, regular technical debt can be just the debt that was planned and agreed upon, allowing you to release now and work on it later.

This podcast also introduced me to another great term, separate from legacy code: Code Rot. Code Rot is not code so old as to be legacy code, but code that hasn’t been touched in a while and isn’t kept up to date with newer builds and the surrounding code design.

This podcast gave me an interesting mindset for thinking about a production cycle: acknowledging technical debt as somewhat inevitable, something to plan for before sprints as part of the team, and to be prepared for. Cruft is really what I thought technical debt actually was, because I was only reflecting on code written by one person. I had somehow ignored the aspect of working on a development team, and that has changed my idea of development. Cruft really is the more personal or workplace issue: on one side, fighting the urge to take the easy way out; on the other, a pressure outside a developer’s control, a business forcing the cruft to occur by shortening deadlines.

From the blog CS@Worcester – A Boolean Not An Or by Julion DeVincentis and used with permission of the author. All other rights reserved by the author.

JSON

Summary:

This article gives us insight into JSON and why it is a popular format that we can take advantage of. We learn about what JSON is, the history of JSON and how it came to be, JSON structure, syntax, and usage, the benefits of JSON, and how to use JSON. After reading this, we should have a better understanding of how JSON works.

Reason:

The reason why I chose this article is that, since we are now working with REST APIs in class, I felt it was important to find an article that would assist with that. There’s a lot we still have to learn about JSON, and after reading this, I hope it helps anyone who reads it to understand the syntax and structure of JSON.

What I learned:

JavaScript Object Notation (JSON) is a data format that was built to be easily readable for both humans and computers. It has become the most popular way for applications to exchange data, and it’s most commonly encountered when working with APIs. JSON is a data interchange format that is easy to parse and generate. It is an extension of the syntax used to describe object data in JavaScript, with a text format that uses object and array structures for the portable representation of data. All modern programming languages support these data structures, making JSON completely language independent. JSON is commonly associated with REST services, especially for APIs on the web. Although the REST architecture for APIs allows any format, JSON provides a flexible message format that increases the speed of communication.

The structure of a JSON object is as follows: curly braces {} hold objects, the data are in key/value pairs, square brackets [] hold arrays, each data element is enclosed in quotes if it’s a character value or without quotes if it’s a numeric value, and commas are used to separate pieces of data. JSON data types are string, number, object, array, boolean, and null.

The benefits of JSON are that it’s a compact and efficient format, easily readable, broadly supported, self-describing, and flexible. YAML is a superset of JSON that’s primarily designed to support more complex requirements by allowing user-defined data types as well as explicit data typing. Most popular programming languages have built-in support to parse JSON, and they also include a plethora of JSON manipulation functions, including the ability to modify JSON, save to and read from JSON files, and convert common data objects into other data formats. JSON has become a vital tool in the developer’s arsenal because of its ease of use, efficiency, and flexibility.
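
A small made-up example showing that structure and each of those data types:

```json
{
  "name": "Dylan",
  "age": 21,
  "isStudent": true,
  "nickname": null,
  "courses": ["CS-343", "CS-348"],
  "address": { "city": "Worcester", "state": "MA" }
}
```

Strings are quoted, numbers are not, square brackets hold an array, nested curly braces hold another object, and commas separate each piece of data.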

Source: https://www.nylas.com/blog/the-complete-guide-to-working-with-json/

From the blog CS@Worcester – Life as a CS Student by Dylan Nguyen and used with permission of the author. All other rights reserved by the author.

Software Construction Log #5 – Understanding Port Mapping in Docker

          In a previous post regarding containerization, I briefly mentioned the uses of containers in application development and deployment, prior to specifically learning how to utilize Docker and its containers. I went on to explain how Docker can handle development of applications through the use of images that contain the dependencies needed for development, as well as how Docker manages data retention between the local host system and the container file system through the use of volumes and bind mounts. My learning of Docker started with a focus on local application development and deployment before moving on to utilizing ports to access running applications through the network via a port number.

          By default, ports in a Docker container are not published and thus cannot be accessed this way, even if the container itself is up and running. This means that if one needs to access an application on port 5000 of the local host, this would not be possible if that port has not been published in the first place. Therefore, an important part of deploying web-based applications through Docker involves utilizing the container’s ports. This concept is referred to as “port mapping”, in which ports of the local host are mapped to specific ports in the container. This is useful when, during development, multiple running containers need to use the same port: different local host ports are mapped to the specific container port that needs to be used, directing traffic appropriately and thus avoiding potential port conflicts between containers.

          As I was researching resources specifically about port mapping and port forwarding in Docker containers, I came across an article named Understanding Docker Port Mapping to Bind Container Ports on LearnItGuide.Net. In this article, the author explains how to map Docker ports, demonstrating the process on the command line with a basic application and multiple containers as an example. Moreover, they show the multiple ways a port value can be expressed: by passing the port number directly (thus needing to access the application through localhost:port_number), by passing the IP of the host along with the specified port number that needs to be mapped, or by using automapping and thus mapping a random host port to the Docker container. However, it is important to note that port mapping can also be applied using docker-compose YAML files. In an article named Understanding Docker Port Mappings on Dev-Diaries.Com, the author explains how port mapping works when using docker-compose files, showing examples of the ways we can map host ports to Docker ports.
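
Based on the command forms those articles describe, here is a brief sketch (the image name and port numbers are arbitrary examples):

```bash
# Map host port 5000 to container port 5000
docker run -d -p 5000:5000 my-web-app

# Bind a specific host IP and port to the container's port 5000
docker run -d -p 127.0.0.1:8080:5000 my-web-app

# Automapping: publish exposed ports to random available host ports
docker run -d -P my-web-app
```

The docker-compose equivalent of the first mapping would look something like:

```yaml
services:
  web:
    image: my-web-app
    ports:
      - "5000:5000"   # host port : container port
```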

          Although there is more to port handling, such as exposing ports to only be accessible to linked services, it is still important to know how ports work in Docker containers in order to properly utilize them in the process of development and deployment.

Direct links to the resources referenced in the post: https://www.learnitguide.net/2018/09/understanding-docker-port-mapping.html and https://www.dev-diaries.com/social-posts/docker-port-mappings/

Recommended materials/resources reviewed related to Docker port publishing:
1) https://www.dev-diaries.com/social-posts/docker-port-mappings/
2) https://betterprogramming.pub/how-does-docker-port-binding-work-b089f23ca4c8
3) https://www.tutorialspoint.com/docker/docker_managing_ports.htm
4) https://www.ctl.io/developers/blog/post/docker-networking-rules
5) https://nickjanetakis.com/blog/docker-tip-59-difference-between-exposing-and-publishing-ports
6) https://riptutorial.com/docker/example/2266/binding-a-container-port-to-the-host
7) https://www.whitesourcesoftware.com/free-developer-tools/blog/docker-expose-port/
8) https://digitalthoughtdisruption.com/2020/09/28/docker-port-mappings/

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.