Front end vs Back end Article

The frontend of an API or web system is the visual interface that visitors and customers see and use. You interact with the frontend through your browser, so frontend development is important for any online website or service. Developers build the frontend with HTML, CSS, and JavaScript, the same tools used to make web apps. Frontends are very different from backends, since the user never sees the backend and rarely interacts with it directly. The backend is where the actual work of the system happens: it governs the functions and every operation that takes place within the system.

Backends are developed in different languages based on the developer's or customer's preferences. The backend also consists of the web server and the storage of the system it supports. But even though the backend contains what makes the site or app function, the frontend is just as important, because it is what is presented directly to the users. Built mainly from HTML, frontends deliver the actual user experience and help dictate how a website feels to use, even though the frontend is not directly involved in executing the functions.
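
This division of labor can be sketched with a small, hypothetical example. The backend below is a plain Python function (the product data and paths are invented for illustration, not from the article): it only returns structured data, while the browser-side HTML/CSS/JavaScript would decide how to display it.

```python
import json

# A hypothetical in-memory "database" the backend manages.
PRODUCTS = {1: {"name": "Mug", "price": 7.50}}

def handle_request(path):
    """Return (status_code, JSON body) for a request path.

    The backend never decides how the data looks on screen; it only
    produces data for the frontend to render."""
    if path.startswith("/products/"):
        product_id = int(path.rsplit("/", 1)[1])
        product = PRODUCTS.get(product_id)
        if product is None:
            return 404, json.dumps({"error": "not found"})
        return 200, json.dumps(product)
    return 404, json.dumps({"error": "unknown path"})

# The frontend (HTML/CSS/JavaScript in the browser) would fetch this JSON
# and turn it into visible page elements.
status, body = handle_request("/products/1")
```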

I chose this article because it was very helpful for understanding what exactly a frontend is and why it matters when developing something such as an app or website: it truly is the entire user experience. The user experience can make or break an app, or any other program that requires a frontend, which is why the frontend development process is so important.

I feel an article like this would be helpful not only to myself but to my peers, as it thoroughly explains what a frontend is as well as what a backend is. This fully differentiates the two so that there is no confusion in the development stages of either. Frontends and backends are similar in some ways but drastically different in how they are developed and the purpose they serve. The article helps the reader understand what each includes in order to develop an effective system that works well for both its developers and its users. I would recommend it to anyone questioning the difference between frontend and backend, as well as anyone wondering how to develop an effective API in general.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.


For this week’s blog post I have found an article on REST API design. REST (Representational State Transfer), or RESTful, API design is built to use and take advantage of existing protocols. Even though REST can be used with almost any protocol, you will normally see this design in HTTP web APIs. The style was defined by Dr. Roy Fielding and became notable for its flexibility. REST can handle multiple types of calls, because the data is not tied to any particular methods or resources. For your API to be considered “RESTful”, it needs to follow six key constraints.

The first constraint is client-server: separating the client and the server from each other so that work can be done on each independently. For example, I should be able to work on my server without it affecting the mobile client.

Second, your API will need to be stateless. MuleSoft, the company the article is from, says of stateless APIs, “REST APIs are stateless, meaning that calls can be made independently of one another, and each call contains all of the data necessary to complete itself successfully. A REST API should not rely on data being stored on the server or sessions to determine what to do with a call, but rather solely rely on the data that is provided in that call itself.” (MuleSoft). For example, I should be able to retrieve an order's information from an order database by calling its order ID, with the call itself carrying everything the API needs.

The third constraint is cacheability. A cache, in computing, is a piece of hardware or software that stores data for future use; an autofilled password in Google Chrome is one example. For your API to be truly RESTful it needs a way to mark data as cacheable. This strengthens your API, because clients interact with it less and can retrieve data quickly.
The fourth thing you will need is a uniform interface, meaning an interface that is not tightly coupled to the API layer. This allows the client to navigate the API easily without knowing the backend. The fifth concept for a RESTful API is a layered system. According to MuleSoft, “As the name implies, a layered system is a system comprised of layers, with each layer having a specific functionality and responsibility” (MuleSoft). Lastly, you will need code on demand, the idea that code or applets can be transmitted via the API. This concept is not widely used, because APIs are written in many different languages and because it raises security questions.
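
The statelessness constraint above can be sketched in a few lines. In this hypothetical example (the order data and field names are invented, not from the article), every call carries the order ID it needs, and no server-side session is consulted between calls:

```python
import json

# A hypothetical order "database" that lives behind the server.
ORDERS = {42: {"item": "keyboard", "quantity": 1}}

def get_order(request):
    """Handle a GET /orders/<id> style call.

    The request itself contains all the data needed (the order ID);
    nothing depends on state left over from a previous call."""
    order_id = request["order_id"]
    order = ORDERS.get(order_id)
    if order is None:
        return {"status": 404, "body": None}
    return {"status": 200, "body": json.dumps(order)}

# Two independent calls: each succeeds or fails on its own data alone.
first = get_order({"order_id": 42})
second = get_order({"order_id": 7})
```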

From the blog CS@worcester – Michale Friedrich by mikefriedrich1 and used with permission of the author. All other rights reserved by the author.


While I was seeking an internship, I ran into a great friend who currently works as a software developer. He mentioned a method called CI/CD and told me that it is a great way to develop, deploy, and maintain software that developers should know about. Following his words, I did some research on the topic and found that what he said was entirely true. For developers who want to learn more about the software development process, this blog is for you. This post focuses on what CI/CD is and what it does for the software development life cycle.

CI/CD is a method to frequently deliver apps to customers by introducing automation into the stages of app development. The three main concepts behind CI/CD are continuous integration, continuous delivery, and continuous deployment. Along with Docker, CI/CD is a great solution to the problems that new code can cause for development and operations teams. It introduces automation and continuous monitoring throughout the life cycle of an app, from the integration and testing phases to delivery and deployment. This practice is referred to as the “CI/CD pipeline” and is supported by development and operations teams working together responsively to find the best approach.

CI/CD is also a software engineering practice in which members of a team integrate their work with increasing frequency. Teams strive to integrate at least daily, or even hourly, approaching integration that happens continuously. CI emphasizes automation tools that help build and test, ultimately focusing on a software-defined life cycle. When CI is successful, build and integration effort drops, and teams can detect errors as quickly as practical. CD is to packaging and deployment what CI is to build and test: a team can build, configure, and package software, and prepare its deployment, to fit its budget at any given time. As a result, customers have more opportunities to experience changes and provide feedback on them.

Continuous integration helps developers merge their code changes back to a shared branch frequently, sometimes even daily. Once the changes are merged, they are validated by automatically building the application and running it through many levels of testing to ensure the changes have not broken the app. CI makes it possible to fix bugs as quickly and as often as possible.

In continuous delivery, every stage, from the merging of code changes to the delivery of production-ready builds, involves test automation and code release automation. The goal is a codebase that is always ready for deployment to a production environment.
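
As a minimal sketch of what such a pipeline can look like, here is a hypothetical configuration in GitLab CI syntax. The stage names and the script commands (`./build.sh`, `./run-tests.sh`, `./deploy.sh`) are placeholder assumptions, not from the post; the point is only that each push runs build, test, and deploy stages automatically, in order:

```yaml
# Hypothetical .gitlab-ci.yml: stages run automatically on each push.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - ./build.sh        # placeholder build command

test-job:
  stage: test
  script:
    - ./run-tests.sh    # placeholder test suite; failure stops the pipeline

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh       # placeholder deployment step
  only:
    - main              # deploy only from the main branch
```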

Those are some of the benefits of CI/CD, and personally I found it very interesting as a future software developer. I think this method helps developers keep their systems at their best while also laying a foundation for security.


From the blog CS@Worcester – Nin by hpnguyen27 and used with permission of the author. All other rights reserved by the author.

Communication is Key

Over the past week, as we were talking about APIs and working on the backends of web applications, I wondered how the frontends of these applications actually interact with the backend software we were working on. So I looked for something to aid my understanding of frontend and backend interactions, and found a post called How Does the Frontend Communicate with the Backend? that covers this area of web design.

While there is little discussion of APIs and their use in this process, I still think it is important to understand how the frontend and backend of a server interact in order to get a complete picture of how web applications work. The post starts off by explaining the basics of how the structure works and the definitions of terms, with the frontend being made of mostly HTML, CSS, and some JavaScript, and being run on the browser, which mostly interprets and renders data received from the backend. The backend receives HTTP requests from the frontend and compiles data to be sent and rendered in the frontend. The post then explains how these two components interact, through the packaging and sending of HTTP requests and responses. These generally are in JSON format, but could just be HTML, images, or any other files or codes. The final part of the post gives examples of how these systems work together to bring about the best possible user experience, by using the speed of the backend running on a host server to process how the user views a site, without having to use the much slower and choppier processing speed of the browser application. These examples include interacting with databases and how sites can render information server-side for SEO or performance reasons.
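
The packaging step the post describes can be simulated in a few lines. This is a hypothetical sketch (the payload and helper names are invented): the backend wraps data into a status, headers, and a JSON body, and the frontend parses the body back into a structure it can render. The frontend side here is simulated in Python, though in reality it would be JavaScript in the browser.

```python
import json

def make_response(data):
    """Package data the way a backend typically would: a status code,
    headers describing the body, and a JSON-encoded body."""
    body = json.dumps(data)
    return {
        "status": 200,
        "headers": {"Content-Type": "application/json",
                    "Content-Length": str(len(body))},
        "body": body,
    }

def render(response):
    """Simulate the frontend's side: parse the JSON body back into a
    structure that browser code could turn into HTML."""
    return json.loads(response["body"])

response = make_response({"title": "Hello", "images": ["logo.png"]})
page_data = render(response)
```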

When I was working on websites, this kind of architecture would have saved me and the company a lot of time. I spent countless hours reducing the size of pictures to improve website performance, all of which could have been saved by using HTTP requests to dynamically load images from a backend database when they were needed. The role of APIs in this process cannot be overstated: they facilitate this transfer of information between frontend and backend through properly formatted requests and responses. This gave me a much clearer understanding of how the interactions between the frontend and backend of web applications are processed, and why they are such an important part of proper web design. In the future, I feel this information will help me better grasp how these systems work and how to utilize them to create better user experiences online.


From the blog CS@Worcester – Kurt Maiser's Coding Blog by kmaiser and used with permission of the author. All other rights reserved by the author.

Software Construction Log #6 – Introducing APIs and Representational State Transfer APIs

            When the topic of interfaces is brought up, the concept of user interfaces tends to come to mind more often than not, given how we utilize interfaces to exchange information between an end user and a computer system by providing an input request and receiving data as an output. Simply put, we often think of interfaces as the (often visual) medium of interaction between a user and an application. While this is true, such interactions are not limited to end users and applications; it is also possible for applications to interact with other applications, sending and receiving information to carry out a certain function. One such interface is the Application Programming Interface (API), a set of functions used by systems or applications to allow access to certain features of other systems or applications. One such case is the use of social media accounts to log in to one's GitLab account: GitLab's API will check that the user is logged in to the specified social media account and has a valid connection before allowing access to the GitLab account. For this blog post, I want to focus mostly on web APIs.

            There are, however, three different architectural styles, or protocols, used for writing APIs, so there is no single way of writing the API that an application will use, and different advantages and trade-offs need to be considered when choosing a specific API protocol as the standard for an application. The styles used for writing APIs are the following:
1. Simple Object Access Protocol (SOAP)
2. Remote Procedural Call (RPC)
3. Representational State Transfer (REST)

Among the above protocols, REST seems to be the most widely used style for writing APIs. REST provides the standard constraints used for interactions and operations between internet-based computer systems. The APIs of applications that utilize REST are referred to as “RESTful APIs” and tend to use HTTP methods, XML for encoding, and JSON to store and transmit data used in an application. Although writing RESTful APIs cannot exactly be considered programming in the same way writing an application in Java is, such APIs still involve some scripting, and creating endpoints for an API still uses specific syntax for specifying parameters and what values they must contain. One article that I came across when researching tutorials, titled A Beginner's Tutorial for Understanding RESTful API on MLSDev.com, uses an example to show how RESTful architectural design works for RESTful APIs. In it, the author, Vasyl Redka, shows an example response to a request, which HTTP methods and response codes are used, and how Swagger documentation is used when writing APIs.
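
The HTTP methods and response codes mentioned above follow well-established conventions, which can be summarized in a small sketch (the endpoint semantics here are the common conventions, not something specific to the article):

```python
# The conventional mapping between HTTP methods, CRUD operations,
# and typical success status codes in a RESTful API.
REST_CONVENTIONS = {
    "POST":   {"crud": "create", "success": 201},  # 201 Created
    "GET":    {"crud": "read",   "success": 200},  # 200 OK
    "PUT":    {"crud": "update", "success": 200},
    "DELETE": {"crud": "delete", "success": 204},  # 204 No Content
}

def describe(method):
    """Summarize what a method conventionally does and returns."""
    entry = REST_CONVENTIONS[method]
    return f"{method} maps to {entry['crud']} and returns {entry['success']} on success"
```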

            Though RESTful APIs may be somewhat confusing at first, because the approach to writing APIs differs from the approach used for writing code, being able to effectively write APIs for web-based applications can be a rather significant skill for web-based application development.

Direct link to the resource referenced in the post:

Recommended materials/resources reviewed related to REST APIs:

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.

Security in APIs

As we continue to work with APIs, I have decided to dedicate this blog post to security in APIs, since we will eventually have to take it into consideration as we go more in depth with them. Security always stays a priority, so I think it is helpful to look at this area now. I have chosen a blog post that gives us some good practices we can use to improve our APIs.

In summary, the author first goes over TLS, which stands for Transport Layer Security, a cryptographic protocol that helps prevent tampering and eavesdropping by encrypting messages sent to and from the server. The absence of TLS means that third parties get easy access to users' private information. You can spot TLS when a website URL starts with https rather than plain http, as on the very website you are reading now. The author then covers OAuth2, a general framework often used with single sign-on providers. It helps manage third-party applications' access to and usage of data on behalf of the user, in situations such as granting a different application access to your photos. They go in depth on the codes and authentication tokens used in OAuth2. Then they cover API keys: we should set the permissions on those carefully and not mismanage them. They say at the end to use good, reliable libraries that you can put most of the work and responsibility onto, so that we can minimize the mistakes we make.

These practices bolster my knowledge of the security built into the APIs we are working with, and this blog post has helped me learn more about the security measures found across the general API landscape. TLS looks to be a standard protocol used on the vast majority of websites, but it makes me wonder about all the sites I have browsed that lacked TLS; you should wonder too, and make sure you have no private information at risk on those sites. It also makes me wonder how TLS is implemented in projects like the LibrePantry API that is being worked on, hopefully it is used there. Then perhaps, as we work further with APIs, we will get to see these security practices implemented.


From the blog CS@Worcester – kbcoding by kennybui986 and used with permission of the author. All other rights reserved by the author.

Docker Images

Resource Link:

For this post I wanted to explore what exactly Docker images are, how they are composed, and how they are interpreted within Docker. I wanted to investigate this topic because, while I understand at a high level how Docker images work and what they are used for, I do not have a deeper understanding of their exact functioning and inner workings. My goal is to attain a better understanding of Docker images so that I may understand the full scope of their functionality and uses. That way I can make better use of Docker images, because while right now I can use them just fine, I cannot yet utilize their more advanced functions and uses.

Basically, a Docker image is a template that has instructions for creating a Docker container. This makes it an easy way of creating readily deployable applications or servers that can be built instantly. Their pre-packaged nature makes images easy to share among other Docker users, for example on a dev team, or to capture a certain version of an application or server that is easy to go back to or redistribute. Docker Hub makes it easy for users to find images that other users have made, so a user can search for an image that fits their needs and make use of it with little to no configuration needed.

A Docker image isn't one file; it is composed of many files that together form the image. Some examples are installations, application code, and dependencies that the application may require. This plays into the pre-packaged nature of a Docker image: all the code, installations, and dependencies are included within it, meaning that a user doesn't need to search elsewhere for the dependencies required to configure it. This also creates a level of abstraction, as all the user needs to know is which image they want to use and how to configure it; they can have little knowledge of the code, installations, or dependencies required, because the image handles it all automatically.

The way to build a Docker image is either through interactive steps or through a Dockerfile. Interactive steps can be run through the command line or compiled into a single file that executes the commands. With a Dockerfile, all the instructions to build the image are within the file, in a similar format for each image. There are advantages and disadvantages to each method, but the more common and standard way is a Dockerfile. With a Dockerfile, the steps to build an image are consistent and repeatable. This makes the image easier to maintain over its lifecycle, and it allows for easier integration into continuous integration and continuous delivery processes. For these reasons it is preferable for a large-scale application, often worked on by multiple people. The advantage of creating an image through interactive steps is that it is the quickest and simplest way to get an image up and running, and it allows finer control over the process, which can help with troubleshooting. Overall, creating a Docker image with a Dockerfile is the better solution for long-term sustainability, given the setup time required to properly configure one.
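
To make the Dockerfile approach concrete, here is a minimal hypothetical Dockerfile for a Python application. The base image, file names, and start command are placeholder assumptions, not from the article; the point is that each instruction contributes a layer to the resulting image:

```dockerfile
# Hypothetical Dockerfile sketch: each instruction adds a layer.

# Base image with Python preinstalled.
FROM python:3.11-slim

# Working directory inside the image.
WORKDIR /app

# Copy the dependency list first, so this layer is cached between builds
# when only the application code changes.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code.
COPY . .

# Command run when a container is started from this image.
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` in the directory containing this file would then produce a reusable, shareable image.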

From the blog CS@Worcester – Alex's Blog by anelson42 and used with permission of the author. All other rights reserved by the author.

Software Framework

A framework is similar to an application programming interface (API), though technically a framework includes an API. As the name suggests, a framework serves as a foundation for programming, while an API provides access to the elements the framework supports. A framework may also include code libraries, a compiler, and other programs used in the software development process. There are two types of frameworks: front end and back end.
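
The "foundation" idea can be illustrated with a toy sketch: unlike a library, a framework owns the control flow and calls the application code you supply to it. All the names below are invented for illustration:

```python
class MiniFramework:
    """A miniature made-up framework that dispatches requests to
    application code registered with it, the way web frameworks do."""

    def __init__(self):
        self.routes = {}

    def route(self, path):
        """Register a handler function for a path."""
        def decorator(handler):
            self.routes[path] = handler
            return handler
        return decorator

    def dispatch(self, path):
        """The framework, not the application, decides when handlers run."""
        handler = self.routes.get(path)
        return handler() if handler else "404 Not Found"

app = MiniFramework()

@app.route("/hello")
def hello():
    return "Hello from application code"
```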

The front end is "client-side" programming, while the back end is referred to as "server-side" programming. The front end is the part of the application that interacts with users. It is made up of elements such as dropdown menus and sliders, and is a combination of HTML, CSS, and JavaScript controlled by our computer's browser. The back-end framework is all about the functioning that happens behind the scenes and is reflected on a website, such as when a user logs into an account or makes a purchase from an online store.

Why do we need a software development framework?

Software development frameworks provide tools and libraries to software developers for building and managing web applications faster and easier. All the frameworks mostly have one goal in common that is to facilitate easy and fast development.

Let’s see why these frameworks are needed:

  1. Frameworks help application programs to get developed in a consistent, efficient and accurate manner by a small team of developers.
  2. An active and popular framework provides developers robust tools, large community, and rich resources to leverage during development.
  3. The flexible and scalable frameworks help to meet the time constraints and complexity of the software project.

Those are some of the reasons frameworks are important. Now let's see the advantages of using a software framework:

  1. Secure code
  2. Duplicate and redundant code can be avoided
  3. Helps in consistently developing code with fewer bugs
  4. Makes it easier to work on sophisticated technologies
  5. Applications are reliable, as several code segments and functionalities are pre-built and pre-tested.
  6. Testing and debugging the code is easier and can be done even by developers who do not own the code.
  7. The time required to develop an application is less.

I chose this topic because I have heard about software frameworks many times and was intrigued to learn more about what they are, how they work, and their importance in the software development field. Frameworks, like programming languages, are important because we need them to create and develop applications.

Software Development Frameworks For Your Next Product Idea

Framework Definition

From the blog CS@Worcester – Software Intellect by rkitenge91 and used with permission of the author. All other rights reserved by the author.

Blog Post 5 – SOLID Principles

When developing software, creating understandable, readable, and testable code is not just a nice thing to do; it is a necessity, because clean code that can be reviewed and worked on by other developers is an essential part of the development process. When it comes to object-oriented programming languages, a few design principles help you avoid design smells and messy code. These are known as the SOLID principles, originally introduced by Robert C. Martin back in 2000. SOLID is an acronym for five object-oriented design principles. These principles are:

  1. Single Responsibility Principle – A class should have one and only one reason to change, meaning that a class should have only one job. This principle helps keep code consistent and it makes version control easier.
  2. Open Closed Principle – Objects or entities should be open for extension but closed for modification. This means that we should only add new functionality to the code, but not modify existing code. This is usually done through abstraction. This principle helps avoid creating bugs in the code.
  3. Liskov Substitution Principle – Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S, where S is a subtype of T. This means that subclasses can substitute for their base class. This is expected because subclasses inherit everything from their parent class; they only extend the parent class, never narrow it down. This principle also helps us avoid bugs.
  4. Interface Segregation Principle – A client should never be forced to implement an interface that it doesn't use, and clients shouldn't be forced to depend on methods they do not use. This principle helps keep the code flexible and extendable.
  5. Dependency Inversion Principle – Entities must depend on abstractions, not on concretions. It states that the high-level module must not depend on the low-level module, but they should depend on abstractions. This means that dependencies should be reorganized to depend on abstract classes rather than concrete classes. Doing so would help keep our class open for extension. This principle helps us stay organized as well as help implement the Open Closed Principle.
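
The Dependency Inversion Principle can be sketched in a short example. The class names below are invented for illustration: the high-level `ReportService` depends on the abstract `Storage` class, not on any concrete storage, so new storage types can be added without modifying the service (which also demonstrates the Open Closed Principle).

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The abstraction both levels depend on."""
    @abstractmethod
    def save(self, text: str) -> str: ...

class InMemoryStorage(Storage):
    """One concrete implementation; a DatabaseStorage could be added
    later without touching ReportService."""
    def __init__(self):
        self.items = []

    def save(self, text: str) -> str:
        self.items.append(text)
        return f"saved: {text}"

class ReportService:
    """High-level module: depends only on the Storage abstraction."""
    def __init__(self, storage: Storage):
        self.storage = storage

    def publish(self, report: str) -> str:
        return self.storage.save(report)

service = ReportService(InMemoryStorage())
result = service.publish("Q3 numbers")
```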

These design principles act as a framework that helps developers write cleaner, more legible code that allows for easier maintenance and easier collaboration. The SOLID principles should always be followed because they are best practices, and they help developers avoid design smells in their code, which will in turn help avoid technical debt.

From the blog CS@Worcester – Fadi Akram by Fadi Akram and used with permission of the author. All other rights reserved by the author.

Best Docker Tutorial IMHO

While in the process of trying to learn Docker as part of my Software Architecture class, I watched five YouTube videos. YouTube has been a real benefit to me for learning college-level science courses. I have had good luck with Biology, Organic Chemistry, and even Statistics tutorials, but have not had much luck so far with Computer Science videos. There are some good ones out there, though, and "Docker Tutorial for Beginners – A full DevOps course on how to run applications in containers" [1] is certainly one of the best I have seen.

This 2-hour course by Mumshad Mannambeth covers all the bases in a clear and interesting manner, enhanced with well-thought-out structure and graphics. The lectures are accompanied by direct access to a full-fledged Docker environment through labs [2] on a website.

The course is broken into sections exploring the definitions of containers, images, and Docker itself, why you need it, and what you can do with it. He then explains the basic commands needed to create a Docker image, how to build and run it in a container, and the basic concepts of Docker Compose, YAML, and the Dockerfile.

He leaves the technical details of installing Docker Desktop for Windows and Mac until later in the video, giving more time up front to clearly describe why you want to use it, what it accomplishes for the software development community, and how containerization is the future of enterprise-level software. These installation sections are also well done but are not relevant for those who already have Docker installed, or for those without the time to build their own environments. The tutorial, and the accompanying interactive quizzes on the right side of the site, are resources I will come back to in the future because of their depth and clarity.

He then spends about 40 minutes going into Docker concepts and commands in depth, and follows up with a history and description of the importance of continuous integration using container orchestration tools like Swarm and Kubernetes. He clearly lays out the architecture of a system that is complex, distributed, fault-tolerant, and easy to implement. He details the importance of DevOps, where the design, development, testing, and operations teams are seamlessly connected and have a symbiotic relationship. This makes everyone's jobs easier, cuts down on departmental finger-pointing when things go wrong, and brings product to market much quicker and with fewer bugs shipped.

He also covers the following areas:

1. Layered architecture

2. Docker registry

3. Controlling volumes

4. Port forwarding

5. Viewing logs

6. The advantages of container architectures over Virtual Machines and Hypervisors.

7. kubectl

I was pleasantly surprised to have found this. Maybe I should give the computer science YouTube community more credit.


[1]  YouTube link:

[2]  Tutorial links:


(1) Why do you need Docker?

(2) Container orchestration tools

(3) Layered architecture

(4) Hands-on Lab

From the blog cs@worcester – (Twinstar Blogland) by Joe Barry and used with permission of the author. All other rights reserved by the author.