Category Archives: CS-343

Two Sides of the Web Development Coin

Frontend

A front-end developer takes the mockup of a website that a web designer has made and converts it into a functioning web solution that users can interact with. To do this, the developer divides the mockup into individual web components, such as buttons, sliders, photos, menus, and forms, and then uses JavaScript to give these pieces specific behavior. Frontend developers build these discrete web page components with a few core technologies. HTML, a web page’s skeleton: HTML stands for Hypertext Markup Language, and it provides the framework for a website. To give a web page its structure, HTML includes element identifiers called tags; every web element has its own tag and place on the page. CSS, the flesh and blood of the web page: CSS enables frontend developers to describe the style of each web page component using properties organized into rules. JavaScript, the most popular programming language of the web, then adds the behavior. This technology stack is one distinction between a front-end and a back-end developer, each of whom works with a separate set of tools.
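
As a small, hypothetical sketch of the kind of behavior a frontend developer attaches to a component (the function name and validation rule below are invented for illustration):

```javascript
// Hypothetical sketch: frontend behavior for a signup form component.
// The validation rule (3-16 letters, digits, or underscores) is invented.
function isValidUsername(name) {
  return /^[A-Za-z0-9_]{3,16}$/.test(name);
}

// In a real page, this logic would be wired to an HTML element, e.g.:
// document.querySelector("#signup").addEventListener("submit", ...);
console.log(isValidUsername("tamu_99")); // true
console.log(isValidUsername("a"));       // false
```

This is the split the post describes: HTML supplies the form element, CSS styles it, and JavaScript like this decides what happens when the user interacts with it.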

Backend

The back end is the engine of a website. On a site with only a front end, clicking the Submit button under a form does nothing: the website cannot add new users or dynamically update content. The back end of a website comprises three important components. A database is a collection of linked data tables; backend developers handle databases using a variety of database management systems (DBMS), such as MySQL and Oracle. The database is stored on a server, which is a computer. A web server is a program that runs on a physical server, receiving requests from a website and sending back data from the database; Apache HTTP Server is one example. The primary goal of a backend developer is to guarantee that this data flow is seamless and error free.
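
A minimal sketch of that request/response flow, with a plain object standing in for a real database (all names here are invented for illustration):

```javascript
// Hypothetical in-memory "database" of users.
const db = { users: [{ id: 1, name: "Ada" }] };

// A back-end handler: receive a request, touch the database,
// and return a response the front end can render.
function handleRequest(request) {
  if (request.method === "GET" && request.path === "/users") {
    return { status: 200, body: db.users };
  }
  if (request.method === "POST" && request.path === "/users") {
    const user = { id: db.users.length + 1, name: request.body.name };
    db.users.push(user); // this is the step a frontend-only site cannot do
    return { status: 201, body: user };
  }
  return { status: 404, body: null };
}

console.log(handleRequest({ method: "GET", path: "/users" }).status); // 200
```

In a real back end the handler would run inside a web server and the object would be a DBMS like MySQL, but the shape of the flow is the same.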

Conclusion

Web development is a multi-faceted process with a lot of moving parts. While web developers have a variety of responsibilities, we may divide them into two groups. Frontend developers are in charge of the user-facing aspect of a web application. Backend developers, on the other hand, work with the hidden part of the system, which includes a database and a server. These two groups collaborate to build aesthetically appealing and engaging websites and online apps.

Why this topic?

In my mind, the distinction between frontend and backend was hazy. I was aware of them, but I had the impression that if someone asked me what the difference between frontend and backend was, I would be unable to respond. So far, the best way to explain the distinction between frontend and backend is to use an automobile as an example. The frontend of an automobile should be created and shaped attractively, but without the engine, the car is meaningless. The engine, on the other hand, is the backend that brings all of the beautiful designs and colors to life on the road.

Link: https://www.psd2html.com/blog/a-frontend-vs-backend-developer-two-sides-of-the-web-development-coin.html

From the blog cs@worcester – Dream to Reality by tamusandesh99 and used with permission of the author. All other rights reserved by the author.

Design Smells

When writing code, it is always important to write as cleanly and efficiently as possible. Code can sometimes be confusing and hard to read to begin with, and writing poor code makes it much harder. One way to measure the quality of code is by looking for design smells: indicators such as rigidity and fragility that reveal how poor a certain part of a design is.

Yonatan Fedaeli writes in his article “Software Design Smells” that there are seven main design smells: rigidity, fragility, immobility, viscosity, needless complexity, needless repetition, and opacity. These smells exist in all code to some extent, but some code has far fewer smells than others. The goal is to limit each of these smells as much as possible.

Rigidity is a software’s inability to change, which makes it very difficult to add new features to a project. An easy way to fix this design smell is to design each section of the software independently of the others, since the more interdependence a project has, the more rigid it will be.

Fragility is a software’s tendency to break whenever a change is made. This is similar to rigidity, but different in that it is not difficult to make the change but difficult to make the change work. Any small change can totally break the entire project in a very fragile design.

Immobility is a project’s inability to reuse components. When a project is immobile, each piece of the project is so dependent on the rest that it is almost impossible to use it in a different context. When writing a large project, it is important to reuse code so that you are not wasting time writing the same thing multiple times.

Viscosity describes how hard it is to follow the original design when making changes. When a project is very viscous, it is easier to add a new feature with a hack that violates the design goal than with a change that preserves the design. A project’s environment can also be viscous: if the project is difficult to build or test, you may spend extra time just trying to get it to run.

Needless complexity is when a project is more complex than it needs to be. Software is already complex to begin with, and it is important to write clear code so that people reading it do not need to spend extra time trying to figure out what it does.

Needless repetition is when code is duplicated in multiple places when it could instead be abstracted. This is similar to immobility in that code is not reused, but different in that this smell focuses on abstraction and on the fact that code does not need to be rewritten many times.
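
A small, invented sketch of this smell and its fix: two functions repeat the same logic, and the repeated piece is pulled out into one reusable helper.

```javascript
// Smell: the same normalization logic pasted into two places
// (a hypothetical example, not from the article).
function printUser(user) {
  console.log(user.name.trim().toLowerCase());
}
function printAdmin(admin) {
  console.log(admin.name.trim().toLowerCase());
}

// Fix: abstract the repetition into a single function.
function normalizeName(name) {
  return name.trim().toLowerCase();
}

console.log(normalizeName("  Ada ")); // "ada"
```

With the helper in place, both callers use `normalizeName`, and a future change to the rule happens in one spot instead of several.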

Opacity is a project’s inability to be understood by the developer. This is similar to needless complexity in that an opaque project can be needlessly complex, but includes much more than that. Usually an opaque project has a poor initial design which is why it is difficult to understand.

When writing code it is important to write clean, efficient code. Keeping design smells in mind when coding will keep your projects clean and good quality.

From the blog CS@Worcester – Ryan Blog by rtrembley and used with permission of the author. All other rights reserved by the author.

Blog Post Week of 11/22 – How Important Is Bash Scripting For Programmers?

Most of us in CS-348 have been introduced to the terminal and bash scripting in previous classes or by using Linux on our laptops. I haven’t used Linux for any major projects, but I installed it as a dual boot on my laptop (Windows and Linux on two different SSDs) and use it for small side projects or to learn new things. It’s for this reason I use a laptop that’s a little older, so that everything inside is easily accessible and I can modify it however I like. The terminal allows a user or programmer to control their computer and make requests to the OS through keyboard commands rather than through a graphical user interface. One advantage of the command line over a GUI is that it keeps a history log of every command that was executed, which makes repeating things much quicker and more organized, and helps if you need a reminder of what you did a few days ago. You can type individual commands at the terminal prompt one at a time, or you can write a bash script: a text file with the .sh extension that contains multiple bash commands to be executed together.

What exactly do programmers do with bash scripts? I found this blog post to be a great introduction for programming students like myself to practice with, with tasks that relate to real IT jobs we may hold one day: https://linuxhandbook.com/bash-automation/. Bash is a great tool for running boring, repetitive tasks without much repeated effort. The first example shows how a system administrator can use a bash script to create a new user on multiple servers, and even how a script can read data entered at the command prompt and use it to create the new user. The third example shows how you can monitor disk space with a bash script and send an email warning whoever is using the computer that disk space is running low and by how much. These are basic examples that show how simple bash scripts can save you a lot of time in the long run.
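
A minimal sketch along the lines of that third example (the threshold and messages are invented; a real script would read the percentage from `df` and send mail instead of printing):

```shell
#!/usr/bin/env bash
# Hypothetical disk-space check: warn when usage crosses a threshold.
THRESHOLD=90

check_disk() {
  local usage=$1   # a percentage, e.g. parsed from: df --output=pcent /
  if [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: disk is ${usage}% full"
  else
    echo "OK: disk is ${usage}% full"
  fi
}

check_disk 95
```

Dropped into cron, a script like this is exactly the sort of mundane, repetitive task bash automates well.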

Bash scripting is something most of us will come across at some point in our programming careers and it’s important to get a handle on the basics. Automating mundane tasks will make work easier and show you have a wide variety of skills.

From the blog CS@Worcester – Site Title by lenagviaz and used with permission of the author. All other rights reserved by the author.

InfoSeCon Day 2 Keynote

This is the keynote speaker for day two of Raleigh ISSA’s 2021 Information Security Conference. In my previous post, about the Day 1 keynote speaker, I detailed my attachment to the Association and my appreciation for the monthly meetings I used to attend as a resource when I lived in Raleigh. The presentation for Day 2 is a little different in that it’s not themed around Halloween or any comedic points.

The speaker, Armando Seay, is a Co-Founder & Board Member at the Maryland Innovation and Security Institute, an organization that assists clients in finding security solutions and applying them to their IT infrastructure. He begins the presentation by discussing the importance of understanding security standards and practices around the world, and how young professionals in the IT field get their starts in the US, Ukraine, Israel, and various other countries. He describes his organization as “geographically boundless” because it assists clients all over the world.

The importance of being able to work with people around the world is something that I’m finding more and more important as I travel around the US from one cultural hub to another. I grew up in Columbia, South Carolina, joined the Coast Guard after high school, and have now moved to Worcester. Each of these places is full of different demographics, and being successful in each of them required working with people from different backgrounds and finding common ground to build upon. I have friends who were born in Haiti, China, West Africa, Europe, and Canada, and they all come from backgrounds that are exceptionally different from my own. Engaging with people from different backgrounds allows one to broaden their perspective and find solutions that they wouldn’t normally find on their own.

Armando goes on to talk about the Academic Partners and what they call their “Partner Ecosystem.” I believe that POGIL team structures work as a kind of microcosm of these: when groups are switched around, we form partnerships that we can maintain and use to better understand our assignments and accomplish goals.

One thing that I found particularly interesting was his discussion of maritime attacks, that is to say, attacks on the software that ships use to catalogue their cargo, handle their navigation, or even control their movement. It’s important to be aware of software vulnerabilities and best practices so that we aren’t exposing others to cybersecurity vulnerabilities through our negligence. Armando closes his presentation by discussing Zero-Trust policies and how critical it is to verify any process or user in your network before you entrust any security clearance to them.

From the blog CS@Worcester – Jeremy Studley's CS Blog by jstudley95 and used with permission of the author. All other rights reserved by the author.

.JS File

This week, I had a chance to work with the JavaScript files of a backend in my homework-5. However, there was a lot of syntax I had never seen before, so I had little sense of what I was writing. Thus, I looked for some resources to satisfy my curiosity and to get a better understanding of JavaScript.

JavaScript Tutorial is one of the resources that I find useful because it provides all the information related to JavaScript. First of all, from the website, I learned that JavaScript is one of the most popular programming languages of the web. It is used to program the behavior of a web page. Next, on the website I can look up and learn the new syntax and definitions relevant to my homework-5. For example, the keywords “let” and “const”, array iteration, and async/await appearing in my homework made me wonder what they are for and why they are there in the .js files.

However, based on the information from the website, I can answer those questions. It says that “let” is used to define a variable, whereas “const” is used to define a constant variable which cannot be reassigned. Variables declared with “let” and “const” cannot be redeclared, and both have block scope. For array iteration, JavaScript has many methods that operate on every item of an array, such as map(), filter(), reduce(), and every(), and especially forEach(), which I met in my homework. forEach() applies the function inside its parentheses to each element of an array. For async/await, the site says that “async” makes a function return a promise, and the “await” keyword can only be used inside an “async” function; “await” makes the function wait for a promise. So, what is a promise in JavaScript? A promise is a JavaScript object that links producing code and consuming code. The picture below is an example given by the website to explain clearly what producing code and consuming code are.
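
To tie these pieces together, here is a small sketch of my own (not taken from the homework) that uses let, const, forEach, and async/await in one place:

```javascript
const nums = [1, 2, 3]; // const: cannot be reassigned
let total = 0;          // let: block-scoped and reassignable

nums.forEach((n) => {   // forEach: runs the callback on each element
  total += n;
});

// async makes the function return a promise; await waits for one.
async function doubled(n) {
  const result = await Promise.resolve(n * 2);
  return result;
}

doubled(total).then((v) => console.log(v)); // prints 12
```

Here `Promise.resolve` plays the role of the producing code, and the `then` callback is the consuming code that receives its result.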

In conclusion, in my opinion, this website is a good resource to help me learn more about JavaScript. It includes all the information I wanted to know, and it also explains every new definition clearly with easily understandable examples. Thanks to this website, I am getting more familiar with JavaScript and have a better understanding of the code given in my homework-5. So, I believe that this website will give me a good foundation for working on my homework-5 or with any .js files.

From the blog CS@Worcester – T's CSblog by tyahhhh and used with permission of the author. All other rights reserved by the author.

Thinking about software testing

blog.ndepend.com/10-reasons-why-you-should-write-tests/

For as long as I have been aware of it, I have been skeptical of the value of software testing. It has always struck me as unnecessary busywork, mostly because that is how writing them for my classes feels (granted, that’s generally how writing code for any class feels in my experience). Either the program works or it doesn’t, right? Why bother writing a test when you could use that time to tighten the ratchet on the code you’d otherwise be testing, or even move on to something else?

In an attempt to broaden my horizons, I sought out some arguments in favor of testing. One idea I found, from Avaneesh Dubey (which he discusses in the above article), is probably the one I personally find most compelling. He argues that the hallmark of a poorly constructed test case is that it is too narrow in its scope or in the functionality it covers. Proper tests, he argues, must reflect “usage patterns and flows.”

Jumping off of this, I would articulate this slightly differently. I think that proper testing methodology would necessarily force developers to be aware of the boundaries they want to encapsulate between. For example, it would be kind of absurd to write tests to make sure a factory class works correctly, because whether or not you’re even using the factory paradigm is almost certainly too technical for non-technical product managers to care about. My understanding of testing is that it’s primarily a way for this kind of person to make judgments about the development process independently of the actual developers.

When you write software tests, you are, or at least should be, asking yourself questions about the high-level flow of the program – what it’s actually doing in physical reality rather than very tiny implementation details – and that is ultimately where your head should be at all times during the development process, in my opinion.

Though obviously, test writing is an important skill for actual work in the industry, I had no intention of ever writing any tests for my personal projects. Now, I’m not really sure that I’m sold on it for personal use, and I’m still a little skeptical about the efficacy of test-driven development in single person projects, but I think it might be of some use to me. In particular, I hope it can help me make some sense of the WebGL code I’m planning to write for a project in the near future, which is certain to contain many fine technical details that can quickly become a headache if not managed properly.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

API

For this week’s blog, I looked more into APIs, since we covered them in class. An application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software. APIs enable companies to open their applications’ data and functionality to external third-party developers, business partners, and internal departments within their companies. This allows services and applications to communicate with each other and leverage each other’s data and functionality through a documented interface; developers do not need to know how the API is implemented, because they simply use the interface to communicate with an application or web service.

How an API works: an API sits between an application and a web server, acting as an intermediary layer that processes data transfer between systems. APIs are sometimes thought of as contracts, with documentation that represents an agreement between two parties: if one party sends a remote request structured a particular way, this is how the other party’s software will respond. For example, imagine a medical equipment distribution company. The distributor could give its customers a cloud app that lets a hospital’s employees check the availability of certain equipment with the distributor. Building such an app could be expensive, limited by platform, and require long development times and ongoing maintenance. Exposing an API instead has several benefits: it lets customers access inventory data in a single place, saving time; the distributor can make changes to its internal systems without impacting customers; and a publicly available API could result in higher sales for the business.
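
The “contract” idea can be sketched as a function: a request shaped one way always yields a response shaped one way (all names and data below are invented for illustration):

```javascript
// Hypothetical inventory "API": the contract is the shape of the
// request going in and the response coming out.
const inventory = { ventilator: 12, monitor: 0 };

function checkAvailability(request) {
  // Contract: { item: string } -> { item, inStock, count }
  const count = inventory[request.item] ?? 0;
  return { item: request.item, inStock: count > 0, count };
}

console.log(checkAvailability({ item: "ventilator" }));
// { item: 'ventilator', inStock: true, count: 12 }
```

The hospital’s app only needs to know this contract; it never sees how the distributor stores or updates the inventory behind it.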

In short, APIs let you open up access to your resources while maintaining security and control; how you open access, and to whom, is up to you. There are four main types of API: Open, Partner, Internal, and Composite. Open APIs are publicly available application programming interfaces you can access with the HTTP protocol. Partner APIs are exposed to or by strategic business partners. Internal APIs remain hidden from external users. Composite APIs combine data or service APIs. I used this article because it explains APIs in an easy way and makes them clearer with an example. And in today’s world, innovation is very important as technology advances day by day, so knowing about APIs will be helpful in the future, as they enable fast innovation.

From the blog CS@Worcester – Mausam Mishra's Blog by mousammishra21 and used with permission of the author. All other rights reserved by the author.

Understanding RESTful APIs and what makes an API “RESTful”

This week, I wanted to solidify my understanding of the concept of REST or RESTful APIs, what they are, and how they work. There is a plethora of information on this topic but I’ll be referring to a blog post by Jamie Juviler and a website on REST API that I found helpful.

Before we get into what REST is, let’s first go over what an API is. API stands for application programming interface; APIs provide a way for two applications to communicate. Some key terms include client, resource, and server. A client is a person or program that makes requests to the API to retrieve information; a resource is any piece of information that can be returned to the client; and a server is used by the application to receive requests and maintain the resources that the client is asking for.

REST or RESTful is an acronym for Representational State Transfer and is a term used to describe an API conforming to the principles of REST. REST APIs work by receiving requests and returning all relevant information in a format that is easily interpretable by the client. Clients are also able to modify or add items to the server through a REST API.

Now that we have an understanding of how REST APIs work, let’s go over what makes an API a RESTful API.

There are six guiding principles of REST that an API must follow:

  1. Client-Server Separation

The REST architecture only allows for the client and server to communicate in a single manner: all interactions are initiated by the client. The client sends a request to the server and the server sends a response back but not vice versa.

  2. Uniform Interface

This principle states that all requests and responses must follow a uniform protocol. The most common language for REST APIs is HTTP. The client uses HTTP to send requests to a target resource. The four basic HTTP requests that can be made are: GET, POST, PUT, and DELETE.

  3. Stateless

All calls with a REST API are independent from each other. The server will not remember past requests, meaning that each request must include all information required.

  4. Layered System

There can be additional servers, or layers, between the client and server that can provide security and handle traffic.

  5. Cacheable

REST APIs are created with cacheability in mind: when a client revisits a site, cached data can be loaded from local storage instead of being fetched from the server again.

  6. Code on Demand

In some cases, an API can send executable code as a response to a client. This principle is optional.

If an API follows these principles, then it is considered RESTful. These rules still leave room for developers to modify the functionality of their API. This is why REST APIs are preferred due to their flexibility. RESTful APIs offer a lot of benefits that I can hopefully cover in my next blog post. 
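
Two of these principles, the uniform interface and statelessness, can be sketched from the client’s side: every call names a method and a resource, and carries everything the server needs, since no session is remembered between calls. (The host, token, and paths below are invented for illustration.)

```javascript
// Hypothetical sketch: building uniform, stateless HTTP requests.
function buildRequest(method, path, { token, body } = {}) {
  const lines = [`${method} ${path} HTTP/1.1`, "Host: api.example.com"];
  // Statelessness: credentials travel with every single request.
  if (token) lines.push(`Authorization: Bearer ${token}`);
  if (body) lines.push("", JSON.stringify(body));
  return lines.join("\r\n");
}

// Uniform interface: the same four verbs cover all interactions.
console.log(buildRequest("GET", "/users/42", { token: "abc" }));
console.log(buildRequest("POST", "/users", { token: "abc", body: { name: "Ada" } }));
```

The server never has to remember the previous request; each message is self-contained, which is what lets layers and caches sit between client and server.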

What is REST

What is an API?

REST APIs: How They Work and What You Need to Know

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Training a Machine to Learn

Machine learning: it’s a buzzword that many people like to throw around when discussing a myriad of tech-related subjects, often as a way to convey that this product or that service is “state of the art” and “cutting edge” because it uses machine learning. While in some instances that can absolutely be the case, in most modern applications it’s a bit of an oversell. Regardless, teaching a computer to do a task, and then to improve at it as more data is given, is no small feat and takes a lot of know-how to do properly and efficiently. In this blog post I hope to share the things I have learned about machine learning and how it works, to help others who find themselves in a similar spot to me: they somewhat understand how it works, but don’t know the specifics behind machine learning.

So how does machine learning work? In its essence, machine learning is a set of techniques that give computers the ability to learn and improve at a task without being specifically programmed to do so. This is what most people familiar with the term understand it to be. But as with many things, we can go deeper. The underlying science of actually having the machine learn its task is pretty complex, and I won’t even pretend that I somehow taught myself the math and logic behind it in the time I spent researching for this blog post. Simply put, to start the process of machine learning you need to have a computer run through its task over and over again while introducing more data points and allowing it to sort them out on its own. This is not as hands-off as many people may think. Just because you wrote the initial algorithm doesn’t mean you can just run it on a dataset and call it a day. Not only do you have to procure and categorize the initial training datasets, you also need to guide the machine in its initial learning process. And I’m not talking about a dataset of tens or even a few hundred data points: for any machine learning algorithm to be successful, you realistically need thousands of data points in each set. The datasets also need to vary in their content in order to give the algorithm the best chance at learning its task.

Take, for example, a common use of machine learning: image identification. Let’s say you want to teach a computer to identify photos that contain a monarch butterfly. You write your algorithm, and now you need to procure a dataset to teach it. What kinds of photos should you train the algorithm on? The simple answer is obviously photos containing monarch butterflies. And while this approach is sound, it leaves the door open for false positives and negatives. Realistically you need a balance of photos containing monarch butterflies, photos not containing them, photos containing things that look like but aren’t monarch butterflies, photos containing other species of butterflies, and so on. Now keep in mind that each data point in the training dataset needs to be categorized by a human initially. So take each of those individual data variants, multiply them by at least a thousand, and you can see how teaching a machine to do something can become very difficult. This isn’t to say that you can’t teach a machine to do something with much less data; it just won’t be as accurate and reliable as a machine taught on a much, much larger dataset. Now, with all the data that has been given to the monarch-butterfly-identifying machine, it can begin to chew through it, identify patterns between the images, and form its own method of categorizing the initial training images. There is an entire postgraduate field of study dedicated to what actually happens INSIDE the algorithm, so I will definitely not be covering the ins and outs of that here. Simply put, the algorithm comes up with its own way to categorize these images and applies that method to any new data that comes in, continuously adjusting its own model with each additional data point. This, in its simplest form, is how many machine learning algorithms work.

From the blog CS@Worcester – Sebastian's CS Blog by sserafin1 and used with permission of the author. All other rights reserved by the author.

Creating Docker Images With DockerFiles

Christian Shadis

Docker, a main focus in my Software Construction, Architecture, and Design course, is an essential tool for the modern developer to be able to completely separate the application being built from the developer’s local machine. Essentially, this allows for all the application’s dependencies and configurations to be packaged with the application and independent of the machine the application is being run on. A Docker container is built from an image, which is basically the ‘blueprint’ of a Docker container. Images are easily-reusable, and many images can be found for use on the Docker Hub. Images can be created from scratch using Dockerfiles, which I previously did not understand. In order to improve my Docker skills and gain the ability to create my own Docker images, I chose to research and write about the structure and use of Dockerfiles.

There are situations in which a developer would want to create their own Docker image – maybe they need a specific version of the Ubuntu operating system, or there are specific modifications that need to be made to that operating system before the application can run. Maybe for an application to be run, many slightly modified versions of the same container must be deployed. The developer can address these scenarios by creating a Dockerfile, which contains all software to be included in the image. Any time the image needs to be used, a container can be created with that image in a single command, preventing the necessity of importing dependencies or changing the machine’s configuration repeatedly.

In Docker Basics: How to Use Dockerfiles, Jack Wallen first describes the necessity and use cases of Dockerfiles. He then supplies a list of keywords that can be used in a Dockerfile, such as CMD, which sets the default command a container runs when it starts, ENV, which is used to set environment variables, and EXPOSE, which documents the networking ports the container will listen on. From there, Wallen demonstrates the process of creating a Dockerfile from scratch in a text editor. Finally, he outlines the process for building a usable image from the Dockerfile. The article concludes with a short section containing a second worked example, this time building a Dockerfile for a different operating system.
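
As a rough sketch of what such a file can look like (the base image, app files, and port below are arbitrary choices for illustration, not Wallen’s exact example):

```dockerfile
# Hypothetical Dockerfile: a small Node.js application image.
FROM node:18-alpine

# Set an environment variable inside the image.
ENV NODE_ENV=production

# Copy the app into the image and install its dependencies at build time.
WORKDIR /app
COPY . .
RUN npm install --omit=dev

# Document the port the app listens on.
EXPOSE 3000

# Default command the container runs when it starts.
CMD ["node", "server.js"]
```

Building and running it would look like `docker build -t myapp .` followed by `docker run -p 3000:3000 myapp`, after which every container created from the image carries the same dependencies and configuration.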

 Knowing how to create containers using pre-built images from DockerHub is an important first step for a developer to get started with Docker, but the true power of Docker images is realized when the developer has the capability to create a new image with the exact configurations needed. Creating custom images is a skill I expect to use often in my development career, since a large portion of applications in development today use Docker containers.

Reference:

https://thenewstack.io/docker-basics-how-to-use-dockerfiles/

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.