Category Archives: CS-343

Communication is Key

Over the past week, as we were talking about APIs and working on the backend of web applications, I was wondering how the front end of these applications actually interacts with the backend software we were working on. So I looked for something to aid my understanding of frontend and backend interactions. In doing so, I found a post on vsupalov.com called How Does the Frontend Communicate with the Backend? that covers this area of web design.

While there is little discussion of APIs and their use in this process, I still think it is important to understand how the frontend and backend of a web application interact in order to get a complete picture of how web applications work. The post starts off by explaining the basics of the structure and defining terms: the frontend is made of mostly HTML, CSS, and some JavaScript, and runs in the browser, which mostly interprets and renders data received from the backend. The backend receives HTTP requests from the frontend and compiles the data to be sent back and rendered in the frontend. The post then explains how these two components interact through the packaging and sending of HTTP requests and responses. The responses are generally in JSON format, but could just be HTML, images, or any other files or code. The final part of the post gives examples of how these systems work together to bring about the best possible user experience, by using the speed of the backend running on a host server to process what the user views on a site, rather than relying on the much slower and choppier processing of the browser itself. These examples include interacting with databases and rendering information server-side for SEO or performance reasons.
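
To make this interaction concrete, here is a minimal sketch of my own (not from the post) of what such an exchange might look like from the frontend's side, written in TypeScript; the URL and field names are hypothetical:

    // Hypothetical endpoint and field names, for illustration only.
    async function loadProducts(): Promise<void> {
      // The frontend (browser) sends an HTTP GET request to the backend.
      const response = await fetch("https://example.com/api/products");

      // The backend replies with an HTTP response, here carrying JSON.
      const products: { id: number; name: string }[] = await response.json();

      // The frontend then interprets and renders the received data.
      for (const product of products) {
        console.log(`${product.id}: ${product.name}`);
      }
    }

The browser only has to render what comes back; the heavy lifting happens on the server that produced the JSON.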

When I was working on websites, this kind of architecture would have saved me and the company a lot of time. I spent countless hours reducing the size of pictures to improve website performance, all of which could have been saved by using HTTP requests to dynamically load images from a backend database when they were needed. And the role of APIs in this process cannot be overstated: they facilitate this transfer of information between frontend and backend through properly formatted requests and responses. This gave me a much clearer understanding of how the interactions between the frontend and backend of web applications are processed, and why they are such an important part of proper web design. In the future, I feel that this information will help me have a better grasp of how these systems work, and how to properly utilize them to create better user experiences online.


Source: https://vsupalov.com/how-backend-and-frontend-communicate/

From the blog CS@Worcester – Kurt Maiser's Coding Blog by kmaiser and used with permission of the author. All other rights reserved by the author.

Software Construction Log #6 – Introducing APIs and Representational State Transfer APIs

            When the topic of interfaces is brought up, the concept of user interfaces tends to come to mind more often than not, given how we tend to utilize interfaces to exchange information between an end user and a computer system by providing an input request and receiving data as an output result. Simply put, we often think of interfaces as the (often visual) medium of interaction between a user and an application. While this is true, such interactions are not limited to end users and applications; it is possible for applications to interact with other applications by sending or receiving information to carry out a certain function. One such interface is the Application Programming Interface (API), which is a set of functions used by systems or applications to allow access to certain features of other systems or applications. One such case is the use of social media accounts to log in to one's GitLab account; in this case, GitLab's API will check that a certain user is logged into a specified social media account and has a valid connection before allowing access to a GitLab account. For this blog post, I want to focus mostly on web APIs.

            There are, however, three different architectural styles, or protocols, used for writing APIs, so there is no single way of writing the API that an application will use, and different advantages and trade-offs need to be considered when choosing a specific API protocol as the standard for an application. The styles used for writing APIs are the following:
1. Simple Object Access Protocol (SOAP)
2. Remote Procedure Call (RPC)
3. Representational State Transfer (REST)

Among the above protocols, REST seems to be the most widely used style for writing APIs. REST provides the standard constraints utilized for interactions and operations between internet-based computer systems. The APIs of applications that utilize REST are referred to as "RESTful APIs" and tend to utilize HTTP methods, XML for encoding, and JSON to store and transmit data used in an application. Although writing RESTful APIs cannot exactly be considered programming in the same way writing an application in Java is, such APIs still utilize some level of scripting, and creating endpoints for an API still requires specific syntax when specifying parameters and what values they must contain. One article that I came across when researching tutorials, titled A Beginner's Tutorial for Understanding RESTful API on MLSDev.com, uses an example to show how RESTful architecture design works for RESTful APIs. In this example, the author, Vasyl Redka, shows an example response to a request, explains which HTTP methods and response codes are utilized, and shows how Swagger documentation is used when writing APIs.
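
As a quick illustration of these ideas, here is a minimal sketch of my own of a RESTful endpoint in TypeScript using the popular Express library (Express is my choice for the example, not something the article prescribes; the routes and data are hypothetical):

    import express from "express";

    const app = express();

    // GET /users/:id returns a representation of one user as JSON.
    app.get("/users/:id", (req, res) => {
      const user = { id: req.params.id, name: "Jane Doe" }; // stand-in data
      res.status(200).json(user); // 200 OK with a JSON body
    });

    // A missing resource is reported with an HTTP response code.
    app.get("/missing", (_req, res) => {
      res.status(404).json({ error: "Not Found" });
    });

    app.listen(3000); // serve the API on port 3000

Each endpoint pairs an HTTP method and a path with the response code and JSON body it returns, which is exactly the kind of contract Swagger documentation describes.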

            Though RESTful APIs may be somewhat confusing at first, given how the approach to writing APIs differs from the approach used for writing code, being able to effectively write APIs for web-based applications can be a rather significant skill for web-based application development.

Direct link to the resource referenced in the post: https://mlsdev.com/blog/81-a-beginner-s-tutorial-for-understanding-restful-api

Recommended materials/resources reviewed related to REST APIs:
1) https://www.redhat.com/en/topics/api/what-is-a-rest-api
2) https://www.ibm.com/cloud/learn/rest-apis
3) https://www.tutorialspoint.com/restful/index.htm
4) https://spring.io/guides/tutorials/rest/
5) https://searchapparchitecture.techtarget.com/definition/REST-REpresentational-State-Transfer
6) https://www.developer.com/web-services/intro-representational-state-transfer-rest/
7) https://www.techopedia.com/definition/1312/representational-state-transfer-rest
8) https://searchapparchitecture.techtarget.com/tip/What-are-the-types-of-APIs-and-their-differences
9) https://www.plesk.com/blog/various/rest-representational-state-transfer/

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.

OBJECT ORIENTED PROGRAMMING

OBJECT ORIENTED PROGRAMMING:

Object-Oriented Programming (OOP) is all about creating "objects". An object is a group of variables and related functions. The variables are often referred to as the properties of the object, and the functions are referred to as the behaviors of the object. Organizing a program around these objects makes for a clear and straightforward software design.
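
As a minimal sketch of my own (the class and names are made up), here is what an object's properties and behaviors look like in TypeScript:

    // A BankAccount object groups properties (variables) with the
    // behaviors (functions) that operate on them.
    class BankAccount {
      private balance: number = 0; // property: the object's state

      deposit(amount: number): void { // behavior: changes the state
        this.balance += amount;
      }

      getBalance(): number { // behavior: reports the state
        return this.balance;
      }
    }

    const account = new BankAccount();
    account.deposit(100);
    console.log(account.getBalance()); // prints 100

The private property also previews encapsulation: outside code can only reach the balance through the object's behaviors.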

Principles of OOP.

Object-oriented programming is based on the following principles:

  • Encapsulation. 

This principle states that all important information is kept inside the object and only selected information is exposed. The implementation and state of each object are held privately within a defined class. Other objects do not have permission to access this class or the authority to make changes. This characteristic of data hiding provides greater software security and avoids unexpected data corruption.

  • Abstraction.

Objects only reveal the internal mechanisms that are necessary for the use of other objects, hiding any unnecessary implementation code. The resulting class can then have its functionality extended. This concept helps developers make additional changes or additions more easily over time.

  • Inheritance.

Classes can reuse code from other classes. Relationships and hierarchies between classes can be established, enabling developers to reuse common logic while still maintaining unique behavior in each subclass. This OOP property encourages more thorough data analysis, reduces development time, and ensures a higher degree of accuracy.

  • Polymorphism.

Objects are designed to share behaviors and can take on more than one form. The program will determine which meaning or usage is necessary for each execution of an object from the parent class, reducing the need to duplicate code. A child class is then created, which extends the functionality of the parent class.
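
To tie inheritance and polymorphism together, here is a small sketch of my own (hypothetical classes) in TypeScript:

    class Animal {
      speak(): string {
        return "...";
      }
    }

    // Inheritance: Dog and Cat reuse and extend Animal.
    class Dog extends Animal {
      speak(): string { // polymorphism: each subclass overrides speak()
        return "Woof";
      }
    }

    class Cat extends Animal {
      speak(): string {
        return "Meow";
      }
    }

    // The program determines at runtime which implementation to use.
    const animals: Animal[] = [new Dog(), new Cat()];
    for (const a of animals) {
      console.log(a.speak()); // "Woof", then "Meow"
    }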

Advantages of using OOP

  1. OOP allows parallel development.

If you are working with software teams, then everyone can work independently once the standard classes have been agreed upon. That allows a degree of parallel development that would not be achievable otherwise.

  2. Module classes can be used again and again.

Once module classes are created, they can be reused in programs or other projects. Sometimes, little-to-no adjustments are necessary for the next project. That gives the team more flexibility as it crosses the starting point.

  3. Coding is easy to maintain.

With OOP, because your coding is centralized, it is easier to create code that can be maintained. That makes it easier to manage your data when you need to make updates.

Disadvantages of using OOP.

  1. It can be inefficient.

Object-oriented programs tend to use more CPU than alternative approaches. That can make OOP an inconvenient choice when there are technical limitations involved, given the resources it can exhaust. Because of the duplication involved, the initial code base may also be larger than the alternatives.

  2. It can be very large.

If OOP is left unchecked, it can generate a large amount of bloated, unwanted code. When that happens, development costs go up, and keeping them down becomes harder.

 

REFERENCES:

  1. https://info.keylimeinteractive.com/the-four-pillars-of-object-oriented-programming
  2. https://medium.com/@cancerian0684/what-are-four-basic-principles-of-object-oriented-programming-645af8b43727

From the blog CS@Worcester – THE SOLID by isaacstephencs and used with permission of the author. All other rights reserved by the author.

THE SOLID

SOLID

 SOLID is an acronym that stands for five key design principles: Single responsibility principle, Open-closed principle, Liskov substitution principle, Interface segregation principle, and Dependency inversion principle. All five are used by software engineers and provide significant benefits to developers.

 

Principles of SOLID:

  • SRP – Single Responsibility Principle

SRP states that "a class should have one, and only one, reason to change." Following this principle means that each class does only one thing, and each class or module is responsible for only one part of the program's functionality. Put simply, each class should solve only one problem.

Single responsibility principle is a basic principle that many developers already use to create code. It can be used for classes, software components, and sub-services.

Applying this principle simplifies testing and maintaining the code, simplifies software implementation, and helps prevent unintended consequences of future changes.
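
As a minimal sketch (my own hypothetical classes, in TypeScript), splitting responsibilities might look like this:

    // Each class has exactly one job, and so one reason to change.
    class ReportGenerator {
      generate(data: number[]): string {
        return `Total: ${data.reduce((sum, n) => sum + n, 0)}`;
      }
    }

    class ReportPrinter {
      print(report: string): void {
        console.log(report); // printing is a separate responsibility
      }
    }

    const report = new ReportGenerator().generate([1, 2, 3]);
    new ReportPrinter().print(report); // "Total: 6"

A change to how reports are formatted never touches the printing class, and vice versa.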

  • OCP – Open/Closed Principle

OCP states that software entities (classes, modules, methods, etc.) should be open for extension, but closed for modification.

Practically, this means creating software entities whose behavior can be changed without the need to edit and recompile the code itself; new behavior is added by writing new code that extends the old, as the sketch below shows.
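
A minimal sketch of my own (hypothetical shapes, in TypeScript) shows the idea:

    // New behavior is added by writing new classes, not by editing old ones.
    interface Shape {
      area(): number;
    }

    class Rectangle implements Shape {
      constructor(private width: number, private height: number) {}
      area(): number {
        return this.width * this.height;
      }
    }

    class Circle implements Shape {
      constructor(private radius: number) {}
      area(): number {
        return Math.PI * this.radius ** 2;
      }
    }

    // totalArea is closed for modification: adding a Triangle class
    // extends the system without changing this function.
    function totalArea(shapes: Shape[]): number {
      return shapes.reduce((sum, s) => sum + s.area(), 0);
    }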

  • LSP – Liskov Substitution Principle

LSP states that “any subclass object should be substitutable for the superclass object from which it is derived”.

While this may be a difficult principle to put in place, in many ways it is an extension of the open/closed principle, as it is a way to ensure that derived classes extend the base class without changing its behavior.

  • ISP – Interface Segregation Principle

The ISP states that "clients should not be forced to implement interfaces they do not use."

Developers should not simply start with an existing interface and keep adding new methods. Instead, start by creating a new interface and then let your class implement as many interfaces as needed. Smaller interfaces mean that developers should prefer composition over inheritance, and decoupling over coupling. According to this principle, engineers should work toward having many client-specific interfaces, avoiding the temptation of one large, general-purpose interface.

  • DIP – Dependency Inversion Principle

The DIP states that "high-level modules should not depend on low-level modules; both should depend on abstractions. Abstractions should not depend on details. Details should depend upon abstractions."

This principle offers a way to decouple software modules. Simply put, the dependency inversion principle means that developers should "depend on abstractions, not on concretions."
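
As a minimal sketch (my own hypothetical names, in TypeScript):

    // The high-level service depends on an abstraction, not on a
    // concrete database class.
    interface DataStore {
      save(record: string): void;
    }

    class MySqlStore implements DataStore { // low-level detail
      save(record: string): void {
        console.log(`Saving to MySQL: ${record}`);
      }
    }

    class OrderService { // high-level module
      constructor(private store: DataStore) {} // depends on the abstraction

      placeOrder(order: string): void {
        this.store.save(order);
      }
    }

    // The concrete store is injected; swapping it for another DataStore
    // requires no changes to OrderService.
    new OrderService(new MySqlStore()).placeOrder("order-42");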

Benefits of the SOLID principles

  • Accessibility
  • Ease of refactoring
  • Extensibility
  • Debugging
  • Readability

REFERENCES:

  1. https://www.bmc.com/blogs/solid-design-principles/

From the blog CS@Worcester – THE SOLID by isaacstephencs and used with permission of the author. All other rights reserved by the author.

Working in containers

Today we will be focusing on containers and why they have become the future of DevOps. For this we will be looking at a blog by Rajeev Gandhi and Peter Szmrecsanyi, which highlights the benefits of containerization and what it means for developers like us.

Containers are isolated units of software running on top of the OS, containing only the applications and their dependencies. Containers do not need to run a full operating system; we have been using the Linux kernel through Docker while using the capabilities of our local hardware and OS (Windows or macOS). When we remotely accessed SPSS and LogicWorks through virtual machines, each VM came loaded with a full operating system and its associated libraries, which is why VMs are larger and much slower compared to containers, which are smaller and faster. Containers can also run anywhere, since the Docker (container) engine supports almost all underlying operating systems, and they work consistently across local machines and the cloud, making containers highly portable.

We have been building containers for our DevOps work by building and publishing container (Docker) images. We have been working on files like the API and backend in development containers preloaded with libraries and extensions like the Swagger preview. We then make direct changes in the code and push them into containers, which can lead to potential functionality and security risks. Therefore, we can change the Docker image itself. Instead of making code changes on the backend, we are building an image with working backend code and then coding on the frontend. This helps us avoid accidental changes to a working backend, but we must rebuild the container if we make changes to the container image.
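
As a rough sketch of what baking backend code into an image might look like (the base image, file names, and port here are hypothetical, not our actual setup), a Dockerfile bundles the code and its dependencies:

    # Hypothetical backend image: code and dependencies baked in.
    FROM node:16

    WORKDIR /app

    # Install the dependencies, then copy in the working backend code.
    COPY package.json ./
    RUN npm install
    COPY . .

    # The port the backend listens on.
    EXPOSE 3000

    # Start the backend when a container is launched from this image.
    CMD ["npm", "start"]

Once this image is built, every container started from it runs the same known-good backend, instead of code edited in place inside a container.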

Containers are also highly suitable for deploying and scaling microservices, which are applications broken into small and independent components. When we work on the LibreFoodPantry microservices architecture, we will have five to six teams working independently on different components of the microservices in different containers, giving us more development freedom. After an image is created, we can deploy a container in a matter of seconds and replicate containers, giving developers more freedom to experiment. We can try out minor bug fixes, new features, and even major API changes without the fear of permanent damage to the original code. Moreover, we can also destroy a container in a matter of seconds. This results in a faster development process, which leads to quicker releases and upgrades that fix minor bugs.

Source:

https://www.ibm.com/cloud/blog/the-benefits-of-containerization-and-what-it-means-for-you

https://www.aquasec.com/cloud-native-academy/docker-container/container-devops/

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Security in APIs

As we continue to work with APIs, I have decided to dedicate this blog post to security in APIs, as we will eventually have to take this into consideration when we go more in depth with them in the future. Security will always be a priority, which is why I think it is helpful to look at this area now. I have chosen a blog post that gives us some good practices we can use to improve our APIs.

In summary, the author first goes over TLS, which stands for Transport Layer Security, a cryptographic protocol used to help prevent tampering and eavesdropping by encrypting messages sent to and from the server. The absence of TLS means that third parties get easy access to users' private information. TLS can be recognized when the website URL contains https and not just http, as on the very website you are reading now. Then they go over OAuth2, a general authorization framework often used with single sign-on providers. It helps manage third-party applications' access to and usage of data on behalf of the user, in situations such as granting another application access to your photos. They go in depth on authorization codes and access tokens in OAuth2. Then they cover API keys: we should set proper permissions on them and not mismanage them. They close by recommending good, reliable libraries that you can put most of the work and responsibility onto, so that we can minimize the mistakes we might make.
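
To make these practices a bit more concrete, here is a minimal sketch of my own (the URL and token handling are hypothetical) showing a request sent over TLS with an OAuth2-style bearer token, in TypeScript:

    // Calling an API over TLS (note the https://) with an access token.
    async function getProfile(accessToken: string): Promise<unknown> {
      const response = await fetch("https://api.example.com/me", {
        headers: {
          // The token proves the user authorized this app to act for them.
          Authorization: `Bearer ${accessToken}`,
        },
      });

      if (!response.ok) {
        throw new Error(`Request failed: ${response.status}`);
      }
      return response.json();
    }

Everything sensitive rides inside the encrypted TLS channel, and the server decides what this token is permitted to access.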

These practices will help bolster my knowledge of the security built into the APIs we are working with. This blog post has helped me learn more about the general framework of security measures across the API landscape. TLS looks to be a standard protocol used on 99% of the websites you see, but it also makes me wonder about all of the websites I have passed through that didn't have TLS; you should wonder too, and make sure you have no private information at risk on those sites. It also makes me wonder how TLS is implemented for the LibreFoodPantry API that is being worked on, and whether it is being used there (hopefully). Perhaps when we work further on the APIs, we will get to see these security practices implemented.

Source: https://stackoverflow.blog/2021/10/06/best-practices-for-authentication-and-authorization-for-rest-apis/

From the blog CS@Worcester – kbcoding by kennybui986 and used with permission of the author. All other rights reserved by the author.

Docker Images

Resource Link: https://jfrog.com/knowledge-base/a-beginners-guide-to-understanding-and-building-docker-images/

For this post I wanted to explore and delve into what exactly docker images are, how they’re composed, and how they’re interpreted within docker. I wanted to investigate this topic because while I understand at a high level how docker images work and what they’re used for, I don’t have a deeper understanding of their exact functioning and inner workings. My goal is to attain a better understanding of docker images so that I may understand the full scope of their functionality and uses. The purpose of this is so that I can better make use of docker images, because while right now I can use docker images just fine, I can’t utilize the more advanced functions and uses of them.

Basically, a docker image is a template that contains the instructions for creating a docker container. This makes it an easy way of creating readily deployable applications or servers that can be built instantly. Their pre-packaged nature makes it easy for them to be shared amongst other docker users, for example on a dev team, or to make a copy of a certain version of an application or server that's easy to go back to or redistribute. In this way, Docker Hub makes it easy for users to find images that other users have made, so a user can just search for an image that fits their needs and make use of it with little to no configuration needed.

A docker image isn't one file; it's composed of many files that together form the image. Some examples of these files are installations, application code, and dependencies that the application may require. This plays into the pre-packaged nature of a docker image: all the code, installations, and dependencies are included within the image, meaning that a user doesn't need to search elsewhere for the dependencies required to configure it. This also creates a level of abstraction, as all the user needs to know is the image they want to use and how to configure it, while needing little knowledge of the code, installations, or dependencies required for that image, because the image handles it all automatically.

The way to build a docker image is either through interactive steps or through a Dockerfile. The interactive steps can be run either through the command line or compiled into a single file that can be run to execute the commands. With a Dockerfile, all the instructions to build the docker image are within the file, in a similar format for each image. There are advantages and disadvantages to each method, but the more common and standard way is through a Dockerfile. With a Dockerfile, the steps to build a docker image are similar and repeatable for each image. This makes images easier to maintain over their lifecycle, and it allows for easier integration into continuous integration and continuous delivery processes. For these reasons it is preferable for a large-scale application, often being worked on by multiple people. The advantage of creating a docker image through interactive steps is that it's the quickest and simplest way to get an image up and running. It also allows for more fine-tuning of the process, which can help with troubleshooting. Overall, creating a docker image with a Dockerfile is the better solution for long-term sustainability, given that you have the setup time required to properly configure the Dockerfile.
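
As an illustration, here is a minimal Dockerfile sketch of my own (the application and file names are hypothetical) showing how installations, dependencies, and application code all end up inside one image:

    # The base layer: an installation of an OS with Python included.
    FROM python:3.10

    WORKDIR /app

    # Dependencies: baked into the image so users need not fetch them.
    COPY requirements.txt .
    RUN pip install -r requirements.txt

    # The application code itself.
    COPY app.py .

    # How a container created from this image starts the application.
    CMD ["python", "app.py"]

Anyone with this file can rebuild the identical image with docker build, which is what makes the Dockerfile approach repeatable and easy to fold into CI/CD pipelines.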

From the blog CS@Worcester – Alex's Blog by anelson42 and used with permission of the author. All other rights reserved by the author.

Software Framework

A framework is similar to an application programming interface (API), though technically a framework includes an API. As the name suggests, a framework serves as a foundation for programming, while an API provides access to the elements supported by the framework. A framework may also include code libraries, a compiler, and other programs used in the software development process. There are two types of frameworks: front end and back end.

The front end is "client-side" programming, while the back end is "server-side" programming. The front end is the part of the application that interacts with the users. Full of dropdown menus and sliders, it is a combination of HTML, CSS, and JavaScript controlled by the computer's browser. The back-end framework is all about the functionality that happens behind the scenes and is reflected on a website, such as when a user logs into an account or purchases from an online store.

Why do we need a software development framework?

Software development frameworks provide tools and libraries that help software developers build and manage web applications faster and more easily. Almost all frameworks have one goal in common: to facilitate easy and fast development.

Let’s see why these frameworks are needed:

  1. Frameworks help application programs to get developed in a consistent, efficient and accurate manner by a small team of developers.
  2. An active and popular framework provides developers robust tools, large community, and rich resources to leverage during development.
  3. The flexible and scalable frameworks help to meet the time constraints and complexity of the software project.

Those are some of the reasons frameworks are important. Now let's see the advantages of using a software framework:

  1. Secure code
  2. Duplicate and redundant code can be avoided
  3. Helps in consistently developing code with fewer bugs
  4. Makes it easier to work on sophisticated technologies
  5. Applications are reliable, as several code segments and functionalities are pre-built and pre-tested.
  6. Testing and debugging the code is easier and can be done even by developers who do not own the code.
  7. The time required to develop an application is less.

I chose this topic because I have heard about software frameworks many times and was intrigued to learn more about them: what they are, how they work, and what their importance is in the software development field. Frameworks, like programming languages, are important because we need them to create and develop applications.

Software Development Frameworks For Your Next Product Idea (classicinformatics.com)

Framework Definition (techterms.com)

From the blog CS@Worcester – Software Intellect by rkitenge91 and used with permission of the author. All other rights reserved by the author.

Blog Post 5 – SOLID Principles

When developing software, creating understandable, readable, and testable code is not just a nice thing to do; it is a necessity, because having clean code that can be reviewed and worked on by other developers is an essential part of the development process. When it comes to object-oriented programming languages, there are a few design principles that help you avoid design smells and messy code. These principles are known as the SOLID principles, and they were originally introduced by Robert C. Martin back in 2000. SOLID is an acronym for five object-oriented design principles. These principles are:

  1. Single Responsibility Principle – A class should have one and only one reason to change, meaning that a class should have only one job. This principle helps keep code consistent and it makes version control easier.
  2. Open Closed Principle – Objects or entities should be open for extension but closed for modification. This means that we should only add new functionality to the code, but not modify existing code. This is usually done through abstraction. This principle helps avoid creating bugs in the code.
  3. Liskov Substitution Principle – Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S, where S is a subtype of T. This means that subclasses can substitute for their base class. This is expected because subclasses should inherit everything from their parent class; they just extend the parent class, they never narrow it down. This principle also helps us avoid bugs (see the sketch after this list).
  4. Interface Segregation Principle – A client should never be forced to implement an interface that it doesn't use, and clients shouldn't be forced to depend on methods they do not use. This principle helps keep the code flexible and extendable.
  5. Dependency Inversion Principle – Entities must depend on abstractions, not on concretions. It states that the high-level module must not depend on the low-level module, but they should depend on abstractions. This means that dependencies should be reorganized to depend on abstract classes rather than concrete classes. Doing so would help keep our class open for extension. This principle helps us stay organized as well as help implement the Open Closed Principle.
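
As a small illustration of the Liskov Substitution Principle, here is a sketch of my own (hypothetical classes, in TypeScript):

    // Any subclass object can stand in where the superclass is expected.
    class Bird {
      makeSound(): string {
        return "chirp";
      }
    }

    class Duck extends Bird {
      makeSound(): string {
        return "quack"; // extends behavior, never narrows the contract
      }
    }

    // Works for a Bird and for every subtype of Bird.
    function describe(bird: Bird): string {
      return `This bird says ${bird.makeSound()}`;
    }

    console.log(describe(new Bird())); // "This bird says chirp"
    console.log(describe(new Duck())); // "This bird says quack"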

These design principles act as a framework that helps developers write cleaner, more legible code that allows for easier maintenance and easier collaboration. The SOLID principles should always be followed because they are best practices, and they help developers avoid design smells in their code, which will in turn help avoid technical debt.

https://www.digitalocean.com/community/conceptual_articles/s-o-l-i-d-the-first-five-principles-of-object-oriented-design#single-responsibility-principle

https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/

From the blog CS@Worcester – Fadi Akram by Fadi Akram and used with permission of the author. All other rights reserved by the author.

Best Docker Tutorial IMHO

While in the process of trying to learn Docker as part of my Software Architecture class, I watched 5 YouTube videos. YouTube has been a real benefit to me for learning college-level science courses. I have had good luck with Biology, Organic Chemistry, and even Statistics tutorials, but have not had much luck so far with Computer Science videos. There are some good ones out there, though, and "Docker Tutorial for Beginners – A full DevOps course on how to run applications in containers" [1] is certainly one of the best I have seen.

This 2-hour course by Mumshad Mannambeth covers all the bases in a clear and interesting manner and is enhanced with a well-thought-out structure and graphics. The lectures are accompanied by direct access to a full-fledged Docker environment through labs [2] on a website.

The course is broken up into sections exploring the definitions of containers, images, and Docker itself, why you need it, and what you can do with it. He then goes on to explain the basic commands needed to create a docker image, how to build and run it in a container, and the basic concepts of Docker Compose, YAML, and the Dockerfile.

He leaves the technical details of installing Docker Desktop for Windows and the Mac until later in the video, giving more time up front to clearly describe why you want to use it, what it accomplishes for the software development community, and how containerization is the future of enterprise-level software. These installation sections are also well done but are not relevant for those who already have Docker installed, or for those without the time to build their own environments. The tutorial, and the accompanying interactive quizzes on the right side of the site, are resources I will come back to in the future because of their depth and clarity.

He then allocates about 40 minutes to going into docker concepts and commands in depth, and follows up with a history and description of the importance of continuous integration using container orchestration tools like Swarm and Kubernetes. He clearly lays out the architecture of a system that is complex, distributed, fault-tolerant, and easy to implement. He details the importance of DevOps, where the design, development, testing, and operations teams are seamlessly connected and have a symbiotic relationship. This makes everyone's jobs easier, cuts down on departmental finger-pointing when things go wrong, and brings product to market much quicker and with fewer bugs shipped.

He also covers the following areas:

1. Layered architecture

2. Docker registry

3. Controlling volumes

4. Port forwarding

5. Viewing logs

6. The advantages of container architectures over Virtual Machines and Hypervisors.

7. kubectl

I was pleasantly surprised to have found this. Maybe I should give the computer science YouTube community more credit.

References:

[1]  YouTube link:

[2]  Tutorial links:

 www.freecodecamp.org

 www.kodecloud.com

Exhibits:

(1) Why do you need Docker?

(2) Container orchestration tools

(3) Layered architecture

(4) Hand-on Lab

From the blog cs@worcester – (Twinstar Blogland) by Joe Barry and used with permission of the author. All other rights reserved by the author.