Category Archives: Week 9

Be the worst

Be the lion’s tail rather than the fox’s head!

Tractate Avot

I found this pattern interesting, but also one that leaves a lot of room to be described from different perspectives. “Be the worst” is a title that, at first glance, doesn’t sound very good. It grabs your attention, and it also sounds like a piece of advice.

In every work environment, there are employees who are not at the same level of knowledge. One person is strong in factual and theoretical knowledge, another is good at finding and applying information, another is good at coding, another at communication skills, and another at organizing the work. A company might also have a person who is capable of doing all of these, and it might have people who can’t do any of them.

There is a saying attributed to Mahatma Gandhi: “Practice as if you are the worst, perform as if you are the best!” This quote expresses the same idea as the pattern that Tractate Avot points to. Even if you are the worst on the team, you still need to practice and learn from the best ones. But at the same time, whatever you learn, you have to share it and perform it as the best thing you can do.

The solution the author gives here is to be surrounded by developers who are better than you. According to him, the solution is to find a stronger team where you can be the weakest member. Okay, when I read it in this context it looks like advice I can learn from. But at the same time it looks a little challenging to me. Being in an environment where you are the worst of all the employees could, first of all, make you feel inferior. You might feel uncomfortable in a place where you know less than anyone else. I think that nowadays anyone has the possibility to inform themselves about something they are not sure about. Everything is well explained through documentation and tutorial videos, where anyone can learn something new or work out how something works when they are not really sure. Seen this way, it might not be necessary to feel like someone who is just “the worst.” Of course, we can look for help when we are not sure about something, or when we want to learn something new and unseen before. But I don’t agree with being the worst.

To sum up, the point where I totally agree with Tractate Avot is the idea that one thing that can help us maintain a more accurate self-assessment is collaborating with great developers. Only in that way can we have successful software projects on our teams.

From the blog CS@worcester – Xhulja's Blogs by xmurati and used with permission of the author. All other rights reserved by the author.

Use the Source

The pattern I am writing about today is Use the Source. The problem is that an apprentice can be picking up bad habits, and he will not know it if he is not reading someone else’s code. The author specifically mentions that the apprentice should look at the source code of the tools he is using. One reaction I had to this was an interview experience of mine. I solved a coding problem, and my interviewer asked me what the time complexity was. I was using the Java library method Arrays.sort(), but I had not looked up the time complexity of that library method. I eventually figured it out, but it made me realize that I should know more about the libraries I am using in my code.
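
To make the lesson concrete, here is a minimal sketch of the kind of library detail I am talking about (the complexity notes in the comments reflect the documented behavior of the JDK’s Arrays.sort; the example itself is mine, not from the interview):

```java
import java.util.Arrays;

public class SortComplexityDemo {
    public static void main(String[] args) {
        // On primitive arrays, Arrays.sort uses a dual-pivot quicksort,
        // which runs in O(n log n) time on average.
        int[] numbers = {5, 2, 9, 1, 7};
        Arrays.sort(numbers);
        System.out.println(Arrays.toString(numbers)); // [1, 2, 5, 7, 9]

        // On object arrays, Arrays.sort uses a stable merge sort variant
        // (TimSort), which guarantees O(n log n) in the worst case.
        String[] words = {"pear", "apple", "banana"};
        Arrays.sort(words);
        System.out.println(Arrays.toString(words)); // [apple, banana, pear]
    }
}
```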

The author also mentions the importance of refactoring code. When reading somebody else’s codebase, the craftsman should look for ways to improve it and understand why the project is built the way it is. This experience could also lead to the apprentice refactoring his own code. The author also mentions code reviews. This is something I am really looking forward to as a professional software engineer. When we learn the fundamentals in school, we are usually working alone or in a small group, but in the professional world we collaborate. This will give me, as an apprentice, an opportunity to learn from the source code of my coworkers. Open source is another great way for new craftsmen to gain this experience.

The author recommends that the apprentice contribute to open source so he can read source code and learn from it. He also mentions that if a programmer can understand a program from its source code, that is the mark of a good craftsman. This makes a lot of sense to me, and understanding the codebase is typically what a programmer does on their first day or days on the job. In order to contribute to a project of any size, the craftsman should know what the code currently does. So I agree that using the source is an incredibly important skill.

From the blog CS@Worcester – Jim Spisto by jspisto and used with permission of the author. All other rights reserved by the author.

REST API Design

For the past few weeks, I have been working on class activities relating to REST API Design. I wanted to research more about it before continuing to homework assignments utilizing it.

This led me to the website: https://www.mulesoft.com/resources/api/what-is-rest-api-design.

A REST API is an application programming interface that follows the constraints of REST, or Representational State Transfer. A REST API takes advantage of HTTP, which means that developers do not need to install additional software to utilize it. There are six main constraints of REST API design: client-server, stateless, cache, uniform interface, layered system, and code on demand.

Client-server constraint: the client and the server should be separate from each other and be able to change independently. This allows changes to be made to a mobile application without impacting the server, and changes to be made to the database or server without affecting the mobile application.

Stateless constraint: REST APIs are stateless, meaning that each call can be made independently and have enough data to complete itself. REST APIs should only rely on the data that is given in the call itself. Servers do not store identifying information; instead the call has that information, be it an access token or user ID. This helps make the API more reliable because it does not need to rely on multiple calls to the server to create an object.
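
A minimal sketch of what statelessness looks like from the client side, assuming Java 11’s built-in HTTP client; the URL and token are placeholders I made up for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StatelessRequestDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The URL and token below are placeholders for illustration only.
        // Because the API is stateless, every request carries everything the
        // server needs (here, the access token) rather than depending on a
        // session the server remembers from an earlier call.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users/42"))
                .header("Authorization", "Bearer <access-token>")
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```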

Cache constraint: a REST API should encourage the storage of cacheable data, so that clients can reuse responses instead of making repeated requests for the same resource.

Uniform interface constraint: the uniform interface provides a standard way for the client and the server to communicate. It should also allow the application to evolve without being intertwined too tightly with the API layer.

Layered system constraint: having a layered system helps shield differently accessed components from one another. It allows systems to be moved in and out of the architecture, which helps as technology evolves. It can also help with security, since attacks can be stopped at a proxy layer or at other layers before they reach the actual server architecture.

Code on demand constraint: this constraint is optional, but it allows code or applets to be sent to the client through the API, meaning the server can extend the client’s functionality by providing executable code.

I chose this source because I wanted to read about how REST APIs are broken down, and how these constraints are helpful. This source helped me understand why the independent evolution of the servers and the clients is important, and I will consider this information when I need to decide whether a REST API is the type of API I should use for a future project.

From the blog CS@Worcester – CS With Sarah by Sarah T and used with permission of the author. All other rights reserved by the author.

SOLID Principles

Hello and welcome back to another week of my blog! This week I want to talk about the SOLID design principles, since it is important for other programmers to be able to read and understand your code so you can work on it together. Having code that is not clean and is hard to understand will ultimately hinder you in the long term, while clean code is easier to write and understand as well. The acronym SOLID stands for five principles: the Single Responsibility Principle, the Open-Closed Principle, the Liskov Substitution Principle, the Interface Segregation Principle, and the Dependency Inversion Principle. These principles were formulated by a computer scientist named Robert C. Martin, who is also the author of Clean Code. I’m reading that book for CS-348.

Starting with the Single Responsibility Principle: this principle states that a class should have only one responsibility and, furthermore, only one reason to change. For example, consider a program that calculates the area of shapes. There would be classes that define the shapes themselves (e.g. a Square class) and a class that calculates the area of the shapes (e.g. a ShapeArea class). The ShapeArea class should do nothing but calculate areas.
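
A minimal sketch of that separation in Java (the class names follow the example above; the field and method names are my own assumptions):

```java
// One class is responsible for describing the shape...
class Square {
    private final double side;

    Square(double side) {
        this.side = side;
    }

    double getSide() {
        return side;
    }
}

// ...and a separate class is responsible only for calculating area.
// If the area formula changes, only ShapeArea has a reason to change.
class ShapeArea {
    double areaOf(Square square) {
        return square.getSide() * square.getSide();
    }
}
```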

The Open-Closed Principle means that classes should be open for extension but closed for modification. In other words, programmers should be able to add new features without touching the existing code, because touching existing, working code could introduce new bugs.
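
A small sketch of what this might look like in Java, assuming a hypothetical Shape abstraction: code that works with Shape never has to change when a new shape class is added.

```java
// Existing code depends only on this abstraction and never needs to change.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    @Override
    public double area() {
        return Math.PI * radius * radius;
    }
}

// Adding a new shape is an extension: a new class, not a modification
// of Circle or of any code that already works with Shape.
class Rectangle implements Shape {
    private final double width, height;

    Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    @Override
    public double area() {
        return width * height;
    }
}
```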

The Liskov Substitution Principle states that subclasses should be substitutable for their base classes. This means that if class B is a subclass of class A, we should be able to pass an object of class B to any method that expects an object of class A, and the method should not produce any unexpected behavior in that case.
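
A minimal sketch using hypothetical classes A and B, matching the wording above: any method written against A should behave sensibly when handed a B.

```java
class A {
    String greet() {
        return "hello from A";
    }
}

// B is a subclass of A and keeps the contract of greet():
// it still returns a sensible greeting, so it can stand in for A.
class B extends A {
    @Override
    String greet() {
        return "hello from B";
    }
}

class LspDemo {
    // This method expects an A...
    static void printGreeting(A a) {
        System.out.println(a.greet());
    }

    public static void main(String[] args) {
        printGreeting(new A()); // hello from A
        printGreeting(new B()); // ...but works just as well with a B
    }
}
```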

The Interface Segregation Principle states that larger interfaces should be split into smaller ones. By doing that, we can ensure that implementing classes only need to be concerned about the methods that are of interest to them. 
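
A small sketch with hypothetical Printer and Scanner interfaces: splitting one large interface means a simple printer never has to stub out methods it does not support.

```java
// Instead of one large "machine" interface with print(), scan(), and more,
// split it so classes only implement what they actually support.
interface Printer {
    void print(String document);
}

interface Scanner {
    void scan(String document);
}

// A simple printer is not forced to provide an empty scan() method.
class BasicPrinter implements Printer {
    @Override
    public void print(String document) {
        System.out.println("Printing: " + document);
    }
}

// A multifunction device opts in to both small interfaces.
class MultiFunctionDevice implements Printer, Scanner {
    @Override
    public void print(String document) {
        System.out.println("Printing: " + document);
    }

    @Override
    public void scan(String document) {
        System.out.println("Scanning: " + document);
    }
}
```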

The last one is the Dependency Inversion Principle. The general idea is that high-level, complex modules should be easily reusable and unaffected by changes in low-level utility modules. To achieve this, there needs to be an abstraction between the high-level and low-level modules, so that the high-level module depends on the abstraction rather than directly on the low-level details.
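
A minimal sketch, with made-up class names, of a high-level module depending on an abstraction instead of a concrete low-level class:

```java
// Abstraction that both levels depend on.
interface MessageSender {
    void send(String message);
}

// Low-level utility module: one concrete way of sending.
class EmailSender implements MessageSender {
    @Override
    public void send(String message) {
        System.out.println("Emailing: " + message);
    }
}

// High-level module: depends only on the MessageSender abstraction,
// so swapping email for SMS (or a test fake) requires no change here.
class NotificationService {
    private final MessageSender sender;

    NotificationService(MessageSender sender) {
        this.sender = sender;
    }

    void notifyUser(String message) {
        sender.send(message);
    }
}

class DipDemo {
    public static void main(String[] args) {
        NotificationService service = new NotificationService(new EmailSender());
        service.notifyUser("Your build finished.");
    }
}
```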

Those are all the SOLID principles. Thank you for reading this blog post!

https://www.bmc.com/blogs/solid-design-principles/#

From the blog Comfy Blog by and used with permission of the author. All other rights reserved by the author.

The elements of Servers, Architectures, and States

Over the course of the semester, we’ve discussed servers, databases, and monolithic and microservice architectures. We’ve discussed and are currently working with REST APIs, which allow us to send and receive information to and from the Internet. While working with REST APIs and doing research on my own, I found that there are elements in what we’re doing that we have yet to discuss. These different elements are paraphrased from parts of the articles linked below.

“REST” stands for “Representational State Transfer”. It is a style of architecture that allows proper interaction with web services that are “RESTful”. In a microservice architecture design, there exist different elements in the software, e.g. one service could manage a consumer’s payment, whereas another service can handle a company’s product/service. In this separation of different systems, there exists a need to communicate these different services together. 

REST APIs function as the glue between different services. However, this raises a further question: why would we use different services? Wouldn’t the separate systems just use more memory by saving redundant information? This question brings us back to the “S” in “REST,” namely “state.” When sending information to or from another service, an object’s data is not saved in the service between requests; more specifically, the information in each request is “separate and unconnected” [1]. This concept is called “statelessness” [2].

Another aspect of microservices is serialization. Serialization is the act of converting an object, with its variables and their respective mutators and accessors, into a stream of bytes. This stream can be binary data or a string. Information about an object that is passed through various services requires serialization; the resulting stream can then be compared with other streams for object comparison, and eventually deserialized back into an object [3].
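
A minimal sketch of Java’s built-in object serialization, with a made-up Order class standing in for data that might travel between services (over REST the representation would usually be JSON text, but the idea of turning an object into a stream and back is the same):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical object that might travel between services.
class Order implements Serializable {
    private static final long serialVersionUID = 1L;
    private final String id;
    private final double total;

    Order(String id, double total) {
        this.id = id;
        this.total = total;
    }

    @Override
    public String toString() {
        return "Order{id=" + id + ", total=" + total + "}";
    }
}

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        Order original = new Order("A-100", 19.99);

        // Serialize: turn the object into a stream of bytes.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }

        // Deserialize: rebuild an equivalent object from those bytes.
        try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            Order copy = (Order) in.readObject();
            System.out.println(copy);
        }
    }
}
```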

We’ve previously discussed Docker, characterizing it once as “lightweight.” These aforementioned aspects are among the ingredients that allow containers, and containerization platforms such as Docker, to be so lightweight.

I decided to talk about this subject because state and statelessness, serialization, and architecture design will be an important part of a future job. A lot of my research so far is applicable to what we’ve been doing, and I think the concepts and their implementations are, to me, novel, interesting, and important. This information reflects my current understanding and has the risk of being incorrect, since I am still learning about these topics. However, I still decided to talk about them because understanding the anatomy of how everything comes together and operates is interesting, especially in light of what we’ve done in class.

Links:

  1. https://www.redhat.com/en/topics/api/what-is-a-rest-api
  2. https://www.redhat.com/en/topics/cloud-native-apps/stateful-vs-stateless
  3. https://dev.to/njnareshjoshi/what-is-serialization-everything-you-need-to-know-about-java-serialization-explained-with-example-9mj

From the blog CS@Worcester – Chris's CS Blog by Chris and used with permission of the author. All other rights reserved by the author.

Blog post 4 – Semantic Versioning

One of the more interesting topics that we covered in class was semantic versioning. I found it interesting because it is something that I see all the time but had no idea what it meant. After reading over the documentation, I’ve learned that semantic versioning is a set of rules that dictate how version numbers are assigned and incremented. Semantic versioning was proposed as a solution to dependency hell, which occurs when version lock or version promiscuity prevent you from easily and safely moving your project forward.

Semantic versioning works in three parts, X, Y, and Z. They are usually written as X.Y.Z, as that is the form semantic versioning must take. Each component says a different thing about the version. The X states what the current major release is, the Y states what the current minor release is after the last major release, and the Z states what the current patch release is after the last minor release. What do a major release, minor release, and patch release mean?

A major release is the first part of the semantic versioning framework; it goes at the beginning of the version number. A major release occurs when you make incompatible API changes. The changes must be backward incompatible in order to be considered major. Major version zero is for initial development, when anything may change at any time. When a new major version is released, the minor and patch version numbers must be reset back to zero.

A minor release is the second part of the semantic versioning framework; it goes in the middle of the version number. A minor release occurs when you add functionality in a backward compatible manner. The minor version must be incremented every time any public API functionality is marked as deprecated. It may also be incremented if substantial new functionality or improvements are introduced within the private code, and it may include patch-level changes. When a minor version is released, the patch version number must be reset to zero.

A patch release is the third and final part of the semantic versioning framework. It goes at the end of the version number, and it refers to an update that focuses either exclusively or primarily on bug fixes. A bug fix is a change made to the code to correct incorrect behavior. A patch release does not add any new features, it just modifies existing code to fix errors or make the code run the way it was intended. All the bug fixes must be backward compatible.
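
A small, hypothetical sketch of the X.Y.Z rules in Java (assuming a recent JDK with records; this is just an illustration, not part of any semantic versioning library):

```java
public class SemVerDemo {
    // Version holds the three parts X.Y.Z described above.
    record Version(int major, int minor, int patch) {
        @Override
        public String toString() {
            return major + "." + minor + "." + patch;
        }

        // Incompatible API change: bump X, reset Y and Z to zero.
        Version nextMajor() {
            return new Version(major + 1, 0, 0);
        }

        // Backward compatible new functionality: bump Y, reset Z to zero.
        Version nextMinor() {
            return new Version(major, minor + 1, 0);
        }

        // Backward compatible bug fix: bump Z only.
        Version nextPatch() {
            return new Version(major, minor, patch + 1);
        }
    }

    public static void main(String[] args) {
        Version v = new Version(1, 4, 2);
        System.out.println(v.nextPatch()); // 1.4.3
        System.out.println(v.nextMinor()); // 1.5.0
        System.out.println(v.nextMajor()); // 2.0.0
    }
}
```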

https://semver.org/

From the blog CS@Worcester – Fadi Akram by Fadi Akram and used with permission of the author. All other rights reserved by the author.

Anestiblog #3

This week I read a blog post about eight factors that make software development go faster, and I thought it was an interesting read to share with everyone. The blog starts off describing time as the most valuable resource in the software world, and how faster is better. It then goes into different modes of speed, such as marathons, extreme sprints, moderate sprints, and intervals, and you can choose between the four depending on the project you are doing. The eight factors are skills, experience, system complexity, technical debt, refactoring, slow automatic tests, overtime/deadlines, and passion. It concludes by describing software development speed as complex, with no single easy solution, but with a lot of different solutions depending on how you work.

I selected this blog post because this class is a software-development-focused class, and I see my future in that area, so I want to know as much as possible about it. A huge reason I selected this blog is that I want to become a software developer myself in the future, so it was a read that I could enjoy. It is good to read blogs on things you care about, because it makes the recommendation much more credible. Another reason I selected this blog is that it does a good job of showing one of the most important parts of a software developer’s job: how to get more done in less time. It gives a large number of different ways to approach the problem, and they are all viable answers. The last reason I will mention is that it does a good job of tempering expectations. Time management is an important part of software development, but it is not just a beginner’s problem; experts still struggle with it today, so do not expect it to be an easy problem to answer.

I learned how important speed is to software development, how many factors contribute to it, and how complex it is. This material affected me hugely because it showed how important saving time, and therefore money, is, and how I can use certain methods in the future to make projects go faster. I will take all of this knowledge with me as I continue in my career. Everyone who wants a career in software development should read this, because it shows how time literally is money.

link : https://www.apptio.com/blog/speed-in-software-development/

From the blog CS@Worcester – Anesti Blog's by Anesti Lara and used with permission of the author. All other rights reserved by the author.

REST APIs

A while ago, I introduced the concept of APIs in the information technology industry. There are three different types of web service APIs: REST APIs, SOAP APIs, and RPC APIs. Among them, REST is without a doubt the most popular currently. REST APIs, also known as RESTful APIs, are named for Representational State Transfer. They are based on four different HTTP commands: GET, PUT, POST, and DELETE.

When a client request is made via a RESTful API, it transfers a representation of the state of the resource to the endpoint. The information is then delivered in one of several formats via HTTP: JSON (JavaScript Object Notation), HTML, XLT, Python, PHP, or plain text. JSON is the most widely used because of its readability.

According to the RESTful API website, a service interface needs to apply these 6 guiding principles and constraints to be referred to as RESTful:

1. Uniform Interface

Multiple architectural constraints applying to the interface so that information is transferred in a standard form requiring:

Identification of resources – requested resources are identifiable and distinct from the representations sent to the client

Manipulation of resources through representations – clients can manipulate the resources via the representation

Self-descriptive messages – each resource representation should carry enough information for the client to know how to process it.

Hypermedia as the engine of application state – after accessing a resource, the client should be able to use hyperlinks to find all other currently available actions.

2. Client – server

It is the architecture made up of clients, servers, and resources, with requests managed through HTTP.

3. Stateless

Statelessness means that no client information is stored between each request and that the server cannot take advantage of previously stored context information.

4. Cacheable

There are cacheable and non-cacheable responses. If the response is cacheable, the client can reuse the response data for later use.

5. Layered system

A layered system allows an architecture to be composed of hierarchical layers by constraining component behavior.

6. Code on demand (optional)

It is the ability to send executable code from the server to the client when requested, extending client functionality.

Although REST API has these criteria to conform to, it is still considered easier to use than SOAP (Simple Object Access Protocol). The reason is that while REST is a set of guidelines which is simple to implement and faster in response time, SOAP has specific requirements like XML messaging, and built-in security and transaction compliance that make it slower and heavier.

Overall, the point of this blog is to introduce you to REST APIs; REST is just a term for the constraints that apply to the API. In order to understand the power of REST, I would suggest creating one yourself and seeing how simple and convenient it is.

From the blog CS@Worcester – Vien's Blog by Vien Hua and used with permission of the author. All other rights reserved by the author.

week-9

Hello, week 9. I want to post a blog that quickly reviews the API topic and looks more closely at REST calls. I was confused about it, so I researched it, using the article “Understanding And Using REST APIs.”

 

What is a REST API

 

API (Application Programming Interface) – a set of rules that allows programs to talk to each other. The developer creates the API on the server and enables the client to talk to it.

REST (Representational State Transfer) determines how the API looks. It is a set of rules that developers follow when they create their API. One of the rules states that you should be able to get a piece of data (called a resource) when you link to a specific URL. Each URL is called a request, while the data sent back is called a response.

The Anatomy Of A Request

It’s important to know that a request is made up of four parts:

  • The endpoint
  • The method
  • The headers
  • The data (or body)

 

The endpoint – the URL that you request (root-endpoint/?). The root-endpoint is the starting point of the API you are requesting from.

The path determines the resource you request. Think of it like an automatic answering machine that asks you to press 1 for one service, press 2 for another service, 3 for yet another service, and so on.

The Method

The method is the type of request sent to the server:

  • GET – Request to get a resource from a server. If you perform a `GET` request, the server looks for the requested data and sends it back.
  • POST – Request to create a new resource on a server. If you perform a `POST` request, the server creates a new entry in the database and tells you whether the creation is successful.
  • PUT & PATCH – Requests to update a resource on a server. If you perform a `PUT` or `PATCH` request, the server updates an entry in the database and tells you whether the update is successful.
  • DELETE – Request to delete a resource from a server. If you perform a `DELETE` request, the server deletes an entry in the database and tells you whether the deletion is successful.

These methods provide meaning for the request you make. They are used to perform the four basic operations: Create, Read, Update, and Delete (CRUD).

The Headers – used to provide information to both the client and the server. They have many purposes, such as authentication and giving information about the body content. You can find a list of valid headers on MDN’s HTTP Headers Reference.

The Data (or body) – contains the information you want to send to the server. It is only used with POST, PUT, PATCH, or DELETE requests.
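
Putting the four parts together, here is a minimal sketch of a `POST` request using Java 11’s built-in HTTP client; the endpoint and JSON body are placeholders I made up:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateResourceDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Endpoint: root-endpoint plus a path (placeholder URL).
        // Method:   POST, because we are creating a new resource.
        // Headers:  describe the body content (and could carry authentication).
        // Data:     the JSON body sent to the server.
        String body = "{\"name\": \"Ada Lovelace\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The server's response tells us whether the creation succeeded.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```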

From the blog Andrew Lam’s little blog by and used with permission of the author. All other rights reserved by the author.

Docker

This week I researched more about Docker to understand what exactly it is and what its applications are. I had never heard of or used Docker before I joined this class, and since we use Docker for almost every class, I was curious to learn more about it. From what I have learned during class, Docker looks like a very useful and versatile piece of software in the field of software development.

Docker is a platform-as-a-service product that uses OS-level virtualization to deliver software in packages called containers. Docker can also be described as an open-source platform for building, deploying, and managing containerized applications. Since Docker is open source, it enables developers to package their applications into containers; these containers simplify the delivery of distributed applications and have become popular as organizations shift to cloud-native development and hybrid multi-cloud environments. A Dockerfile is a simple text file that every Docker container build starts with; it contains instructions for how to build a Docker container image and automates the process of image creation. A Docker image contains executable application source code as well as all the libraries and tools the application needs in order to run properly in a container. Docker images are made up of layers, and each layer corresponds to a version of the image. Docker containers are the live, running instances of Docker images, which users can interact with and adjust according to their preferences. Docker Hub is a public repository where Docker images are stored and can be used by all Docker Hub users. The advantages of using Docker include cost-effectiveness, fast deployment, the ability to run anywhere, flexibility, and so on. The disadvantages are that it advances quickly but sometimes lacks documentation, which leaves some developers hunting for information and wasting time, and that some developers find switching to Docker to be quite a steep learning curve, making it hard for some people to pick up.

I chose this article because it explains Docker and its tools in the simplest way, which helped me understand more about Docker since this is the first time I have ever used the application. It explains the advantages of using Docker and provides clear information on different aspects of it, such as the Dockerfile, Docker images, and Docker containers. And since it is one of the most used applications among software developers, it will be important for me too in the future as a software developer.

Article: https://www.ibm.com/cloud/learn/docker

From the blog CS@Worcester – Mausam Mishra's Blog by mousammishra21 and used with permission of the author. All other rights reserved by the author.