Category Archives: CS-343

Software Construction Log #4: Understanding Semantic Versioning

          Software releases are not always one-and-done affairs; more often than not, the software we use is actively worked on and maintained for an extended period well after its initial release. During its support lifetime, software will undergo several types of changes, including the implementation of new features and various vulnerability fixes. It is important to document such changes just as carefully as the technical aspects of the software, such as its use and structure as conceived during development. Documenting changes in this way is often referred to as “Software Versioning”, and it involves applying a version scheme in order to track the changes that have been implemented in a project. While developer teams may create their own schemes for versioning, many prefer to use Semantic Versioning (https://semver.org/) as a means of keeping track of changes.

          Semantic Versioning is a versioning scheme that applies a numerical label to a project. The label is separated into three parts (X.Y.Z), and each part is incremented depending on the type of change that has been implemented. These parts are referred to in the documentation as MAJOR.MINOR.PATCH and defined as:

1. MAJOR version when you make incompatible API changes,
2. MINOR version when you add functionality in a backwards compatible manner, and
3. PATCH version when you make backwards compatible bug fixes.

https://semver.org/

The way semantic versioning works is that, when incrementing a part, the parts to its right are reset to zero: if a major change is implemented, the minor and patch numbers are reset to zero, and likewise, when a minor change is implemented, the patch number is reset to zero. While this scheme is relatively straightforward in and of itself, the naming convention of the numerical labels (specifically major and minor) may confuse some due to its ambiguity. However, there is another naming convention for semantic versioning, which defines the numerical version label as (breaking change).(feature).(fix).
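As a quick illustration, the reset rule can be sketched in a few lines of Python. This is a hypothetical helper for the simple MAJOR.MINOR.PATCH case, not part of any semver library, and it ignores pre-release and build metadata:

```python
def bump(version: str, change: str) -> str:
    """Increment a MAJOR.MINOR.PATCH version for the given change type."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":        # breaking change: minor and patch reset to zero
        return f"{major + 1}.0.0"
    if change == "minor":        # new feature: only patch resets to zero
        return f"{major}.{minor + 1}.0"
    if change == "patch":        # bug fix: nothing resets
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

print(bump("1.4.2", "major"))  # 2.0.0
print(bump("1.4.2", "minor"))  # 1.5.0
print(bump("1.4.2", "patch"))  # 1.4.3
```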

          Though both naming conventions are used, I find the latter to be far more straightforward to understand and utilize, as the names give one a better idea of the importance of a newly implemented update. As I was researching more resources on Semantic Versioning, along with the official documentation, I came across the following archived article on Neighbourhood.ie titled Introduction to SemVer. In this article, Irina goes into further detail on semantic versioning by explaining the naming of each component, as well as noting the difference between the two naming conventions.

          Although they go into further detail on semantic release in another article, this article sufficiently covers the fundamentals of semantic versioning. While this versioning scheme is not the only one used in software development, it is still an important tool that can help document a project’s history during its support lifetime and outline important changes more clearly and efficiently.

Direct link to the resource referenced in the post: https://neighbourhood.ie/blog/2019/04/30/introduction-to-semver/

Recommended materials/resources reviewed related to semantic versioning:
1) https://www.geeksforgeeks.org/introduction-semantic-versioning/
2) https://devopedia.org/semantic-versioning
3) https://www.wearediagram.com/blog/semantic-versioning-putting-meaning-behind-version-numbers
4) https://developerexperience.io/practices/semantic-versioning
5) https://gomakethings.com/semantic-versioning/
6) https://neighbourhood.ie/blog/2019/04/30/introduction-to-semantic-release/

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.

SOLID Principles of Object-Oriented Programming

Thinking about design patterns, I decided to research the SOLID principles that reinforce the need for design patterns.

A wonderful source explaining the SOLID principles is from https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/.

The SOLID acronym was formed from:

  • Single Responsibility Principle
  • Open-Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

The Single Responsibility Principle states that a class has only one responsibility and therefore should only have one reason that it changes.

The Open-Closed Principle states that extensions to classes can be added for new functionality, but the code of the class should not be changed.

The Liskov Substitution Principle states that objects of a subclass should be able to work with methods that expect objects of the superclass, and those methods should not give irregular output.

The Interface Segregation Principle relates to separating interfaces. There should be multiple specific interfaces versus creating one general interface in which there are many overridden methods that have no use.

The Dependency Inversion Principle states that classes should depend on interfaces or abstract classes rather than concrete classes and functions. This would help classes be open to extensions.

I researched SOLID from this source because it had in-depth examples of code violating the principles, and how they could be fixed. For the Single Responsibility Principle example, it showed how the Invoice class had too many responsibilities, so its methods were separated into new classes, InvoicePrinter and InvoicePersistence, so that each class has one responsibility in the application.
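Based on that description, the refactoring might look something like the following sketch. The fields and method names are my own illustrative guesses, not the article's exact code (the article uses Java):

```python
class Invoice:
    """Only computes totals; printing and persistence live elsewhere,
    so each class has exactly one reason to change."""

    def __init__(self, items):
        self.items = items                      # list of (name, price) pairs

    def total(self):
        return sum(price for _, price in self.items)


class InvoicePrinter:
    """The printing responsibility, split out of Invoice."""

    def print_invoice(self, invoice):
        for name, price in invoice.items:
            print(f"{name}: {price}")
        print(f"Total: {invoice.total()}")


class InvoicePersistence:
    """The persistence responsibility, split out of Invoice."""

    def save(self, invoice, filename):
        with open(filename, "w") as f:
            f.write(str(invoice.total()))


invoice = Invoice([("book", 25), ("pen", 5)])
InvoicePrinter().print_invoice(invoice)
```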

The Liskov Substitution Principle example involves a Rectangle superclass and Square subclass (because a Square is a Rectangle). The setters for the Square class are overridden because of the property of squares to have the same height and width. However, this violates the principle because of what happens when the getArea function from the Rectangle class is tested. The test class puts in a value of 10 for the height and expects the area to be width * 10, but because of the override for the Square class dimensions, the width is also changed to 10. So if the width was originally 4, the getArea test expects 40, but the Square class override makes it 100. I thought this was a great example because I would expect the function to pass the test but did not remember how assigning the height in the test would also change the width.
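The violation described above can be sketched in Python; this is a minimal reconstruction of the same idea rather than the article's own code:

```python
class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def set_height(self, height):
        self.height = height

    def get_area(self):
        return self.width * self.height


class Square(Rectangle):
    def set_height(self, height):
        # A square must keep width == height, so this override silently
        # changes the width too, breaking substitutability.
        self.width = self.height = height


rect = Rectangle(4, 3)
rect.set_height(10)
print(rect.get_area())    # 40: width 4 * height 10, as the test expects

square = Square(4, 3)
square.set_height(10)
print(square.get_area())  # 100: the override changed width as well
```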

The examples provided were very helpful in understanding how it can be seen in code, and the diagrams were a bonus for visual representation of the code. Going forward, I will know that I have this information to reference, if need be, when dealing with class designs. It has shown me principles to help make coherent classes with detailed explanations.

From the blog CS@Worcester – CS With Sarah by Sarah T and used with permission of the author. All other rights reserved by the author.

Design smells

Resource link: https://flylib.com/books/en/4.444.1.47/1/

This week I decided I wanted to learn more about the different design smells. I understood the definitions from the activities in class, but I wanted to learn more about what exactly each design smell meant, and how to avoid them. I think it’s important to learn about the design smells so that you can know to look out for them when working on a project yourself. This is because accidentally implementing one of the design smells in your code could lead to a lot of difficulty and frustration if you ever need to make changes later. For this reason, it is best to actively try to avoid them to ensure that whatever code you write will always be easy to modify and maintain.

This resource sums up exactly what each of the design smells is, why it’s bad, and what the consequences of implementing the design smells are. I liked that it uses practical examples and analogies to make the concepts explained easier to understand. While the concepts may be hard to understand because of how abstract they are, when broken down or applied to a situation everyone knows, it makes it much easier to get a grasp on what is being explained.

The resource breaks down into sections, each describing a different design smell. The first one is rigidity. Rigidity is described as when code is difficult to make even small changes in. This is bad because code will frequently need to be modified or changed, and if it’s difficult to make even small changes such as bug fixes, then that’s a very large issue that must be addressed.

The next design smell is fragility. Fragility is similar to rigidity in that it makes it difficult to make changes to code, but with fragility the difficulty is that the code has a tendency to break when changes are made, whereas with rigidity things don’t necessarily break, but the code is designed in such a way that changes are very difficult to make.

Next, immobility is the design smell where a piece of code could be used elsewhere, but the effort involved in moving the code to where it could be useful is too great to be practical. This is an issue because it means that instead of being able to reuse a piece of code, you have to write completely new code. That means that time is wasted when it could be used for more important changes.

Next, viscosity is the design smell where changes to code could be made in a variety of different ways. This is an issue because it means that time might be wasted deciding on what implementation of a change should be made. Also, disagreements might happen about how a change should be made, meaning that a team won’t be able to work towards the same goal.

The next design smell is needless complexity. Needless complexity is usually when code is added in anticipation of a change that might need to be made in the future. This could lead to bloated code that has many features that aren’t needed or aren’t being used. It is best to add code only when it’s needed to avoid this, and to reduce the overall complexity.

Next, needless repetition is the design smell where sections of code are repeated over and over, rather than being abstracted. This leads to code being hard to change or modify because it has to be changed in many different locations, instead of just one if it were abstracted. This is the benefit of abstraction, that a code modification that changes a lot of how it functions can be changed by altering code in one location.
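A tiny sketch of this smell and its fix, using hypothetical function names purely for illustration:

```python
# Needless repetition: the same discount rule is copy-pasted into every
# pricing function, so changing the rate means editing every copy, and
# missing one copy causes inconsistent behavior.

def book_price(price):
    return price - price * 0.10   # duplicated rule

def pen_price(price):
    return price - price * 0.10   # duplicated rule again

# The abstracted alternative: the rule lives in exactly one place, so a
# change to the rate is made in one location instead of many.
def discounted(price, rate=0.10):
    return price - price * rate

print(discounted(100))  # 90.0
```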

Finally, opacity is the design smell where code is written in a way that’s hard to understand. A piece of code could be considered opaque for a number of reasons, but in general it is when code that is understandable to one person might not be understandable to others. To avoid this, code should be written in a clear and concise manner that is easy to trace and understand, no matter who is working on it.

From the blog CS@Worcester – Alex's Blog by anelson42 and used with permission of the author. All other rights reserved by the author.

Monolithic vs. Microservice Architectures

Several of our classes have discussed two different forms of software architecture. Due to my intent to graduate with both Software Development and Big Data Analytics concentrations, this topic especially interested me. On one hand, I need to know the software architecture designs to best plan an implementation for, and on the other hand, I need to know how to deal with massive data sets. These two factors combined pique my interest as an intersection point of both fields.

A server-side application has several components, namely presentation, business logic, database access, and application integration. The Monolithic Architecture packages these components into a single unit, resulting in ease of development, testing, deployment, and scalability. However, its design flaws include limitations in size and complexity, difficulty in understanding, and the need for application-wide redeployment.1

The Microservice Architecture is split into a set of smaller, connected services, each with its own architecture. Each service performs a given task (or tasks) which is then, in one way or another, linked back to the other services. Microservice Architecture is characterized by faster development, independent deployment, and faster, independent scalability. However, its disadvantages include added complexity, increased difficulty in testing, and increased inter-service change difficulty.1

With these things in mind, questions such as “should I implement a Microservice Architecture or a Monolithic Architecture” and “should I always use one or the other” arise. What I’ve learned is that, despite the fairly detailed descriptions of both of these architectures, there isn’t a uniform consensus on when to apply them. Martin Fowler says a Monolithic Architecture should always be implemented first, because the Microservice Architecture implies a complexity that may not be needed; when the prospect of maintaining or developing a monolithic application becomes too cumbersome, one should begin transitioning to a Microservice Architecture.2 Stefan Tilkov disagrees, however, stating that building a new system is exactly when one should think about partitioning the application into pieces. He further states that cutting a monolithic architecture into a microservice architecture is unrealistic.3

Would I start with a Monolithic Architecture, or would I start with a Microservices Architecture? While thinking about this problem, I know that different parts of the application do different things. One part deals with users placing orders, which requires the users to have their own UI, API, database, and servers; the other requires the approvers to have their own UI, API, database, and servers. This convinces me not to begin development with a monolithic architecture design, since an application’s design should be planned from the start. Therefore, I agree more with Stefan Tilkov on this issue, based on the knowledge that I have as a student.

Links:

  1. https://articles.microservices.com/monolithic-vs-microservices-architecture-5c4848858f59
  2. https://martinfowler.com/bliki/MonolithFirst.html
  3. https://martinfowler.com/articles/dont-start-monolith.html

From the blog CS@Worcester – Chris's CS Blog by Chris and used with permission of the author. All other rights reserved by the author.

REST APIs : First Look

Last week we went over Docker and some general information about it. This week we have started to go over REST APIs in our activities, so I decided to dedicate this week’s blog post to them. The source I have chosen is a fairly recent article written by Zell Liew that is lengthy, gives general insight into what REST APIs are, and should be a good read for anyone looking to learn.

A REST API is, firstly, an application programming interface (API), which allows programs to communicate with each other, and this API follows REST. REST stands for Representational State Transfer, which is a set of rules/constraints that the API has to follow when being created. With REST APIs, we are able to request data and get data back as a response. Zell goes over the four parts of a request. First is the endpoint, which is the URL we request; the endpoint itself is made up of the root endpoint, which looks something like https://api.website.com, the path, which specifies exactly what we are requesting, and query parameters. The second part is the method, the kind of request we actually send, which comes in five types: GET, POST, PUT, PATCH, and DELETE; these determine the action to take. The third part is the headers, which have a number of uses but in general provide information in one way or another. Lastly, there is the data/body, which contains the information we are sending to the server. Zell also goes over authentication, HTTP status codes, and more.
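The four parts Zell describes can be seen by assembling a request with Python's standard library. The endpoint below is a made-up example in the article's root-endpoint style, and the request object is only built, never actually sent:

```python
import json
import urllib.request

# Endpoint = root endpoint + path + query parameters (hypothetical URL)
endpoint = "https://api.website.com/users?page=2"

# Data/body: information we want to send to the server, as JSON
body = json.dumps({"name": "zell"}).encode()

request = urllib.request.Request(
    url=endpoint,
    method="POST",                                 # one of GET/POST/PUT/PATCH/DELETE
    headers={"Content-Type": "application/json"},  # header describing the body
    data=body,
)

print(request.get_method())  # POST
print(request.full_url)      # https://api.website.com/users?page=2
```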

Zell’s explanation of REST APIs is a good start in helping me understand what APIs are and their actual use in computer science. This view of how websites work is also quite interesting. It will also help in class as we potentially work with APIs further in our activities, making the process a bit more fluid. The prevalence of REST APIs is very much apparent, with them being used by big company sites like Google and Amazon, and the rise of the cloud over the years has led to their increased usage. This knowledge of REST APIs is a must if I am to continue to improve and potentially work in a career in this area, as it looks like REST APIs will continue to be used for a while longer since people are comfortable with them, so I should be too.

Source: https://www.smashingmagazine.com/2018/01/understanding-using-rest-api/

From the blog CS@Worcester – kbcoding by kennybui986 and used with permission of the author. All other rights reserved by the author.

Docker Compose

In last week’s blog, I introduced the power of container technology with Docker. Despite the advantage of easy implementation, however, typing the same commands again and again in a terminal is not a good long-term approach. Therefore, Docker Compose is a tool for defining and running multi-container Docker applications by putting all the application commands in a YAML file and having it executed when needed.

According to the Overview of Docker Compose, using Compose is a three-step process:

1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.

2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.

3. Run docker compose up and the Docker compose command starts and runs your entire app. You can alternatively run docker-compose up using the docker-compose binary.

The Dockerfile is a separate concept; in this blog I will introduce how to work with Docker Compose. Let’s take this command as an example when we want to create an nginx server:


docker run -d --rm \
  --name nginx-server \
  -p 8080:80 \
  -v /some/content:/usr/share/nginx/html:ro \
  nginx


This command serves the purpose of mounting html content into an nginx server reachable at port 8080 locally: nginx-server is the name of the container; -p 8080:80 maps the local port (first parameter) to the container’s default port (second parameter), separated by a colon; and -v /some/content:/usr/share/nginx/html:ro mounts the local directory /some/content read-only at the location nginx serves html from. Translating this command into Docker Compose:


version: "3.9"
services:
  nginx-server:
    image: nginx
    ports:
      - 8080:80
    volumes:
      - /some/content:/usr/share/nginx/html:ro


Version is the version of the Compose file format we want to use. All the necessary commands are integrated and placed under the services section. The name of the container is placed first, and all of its entries are written after a colon, one indentation level in, on the following lines. The image tag is the application we want to create, ports is equivalent to -p for mapping ports, and volumes is the same as -v, used for mounting. Under this format, we can define as many images as we want at once and rerun them all with the command docker-compose up in the directory containing that docker-compose.yml file.


version: "3.9"
services:
  nginx-server:
    image: nginx
    ports:
      - 8080:80
    volumes:
      - /some/content:/usr/share/nginx/html:ro
  redis-container:
    image: redis


Overall, the convenience of Docker is taken to a higher level thanks to Compose. Also, this video is another source that I watched to understand more about docker-compose. I suggest using the main Docker Compose page from their site and this video as references if you need them in the future.

From the blog CS@Worcester – Vien's Blog by Vien Hua and used with permission of the author. All other rights reserved by the author.

Anestiblog #1

This week I read a blog post that I thought really related to the class, about why software design is important. The blog starts off talking about what software design is and its different types. The blogger describes software design as a layout for structuring the code of your software application. The different types of software design are listed as Architectural Design, High-level Design, and Detailed Design. Each type gets its own paragraph description to teach you a bit about it. Then the blog goes into its main topic, the importance of software design, separating it into four different points, and then shows a chart of the process. The blog ends with a conclusion about how their company can help you with design.

I selected this blog post because I was always very interested in software design, since I do not know much about that area. This blog has a good, in-depth description of why software design matters that I think every CS major should read. Many students just take a class to get the grade and never care about the material again, and I am against that. Every area in computer science has something to teach you outside of school.

I think this blog was a great read that I recommend for many reasons. I love how the blog separates the different points and types into their own sections, so that everything has its own explanation and you do not need to go look up another site to understand the information. The chart that shows the steps in the software design flow is really well made, and shows you the right way to design, from understanding the requirements to deploying the aspects of development into the design itself. It is a perfect introduction for somebody interested in working in this field, and shows exactly how good designs are made. Another reason I would recommend this blog is because of how every part of an application hinges on the design. No matter how hard you worked on your part of it, if the design is bad, the whole application will fail.

I learned how modularity makes software simpler by making future changes convenient, the privileges of good software design, and how bad software design can destroy an entire application. The material affected me heavily because it showed how software design is the backbone of an entire application, so it should never be taken lightly and should always be given the highest priority, because it can make or break the application. I will take all the knowledge given to me through this blog into the future by looking even further into software design, because it will be a big part of achieving my software development dreams. Now that I know why design is important, I will dig further into what makes a design great or poor, and how design is improved.

Link : https://www.mindbowser.com/why-software-design-is-important/

From the blog CS@Worcester – Anesti Blog's by Anesti Lara and used with permission of the author. All other rights reserved by the author.

UML Diagram Designing

When it comes to programming and building a program, we sometimes need to first create a diagram to give others an overview of the program. I chose this blog because it contains many types of articles based upon various types of diagrams, including detailed information about UML diagrams. I chose this topic because, as a software developer working on team projects, a diagram helps specify the different parts of the system so the team can divide up the work and verify the connections, making sure the structure builds appropriately.

Why Use UML?

UML stands for “Unified Modeling Language”, which can be thought of as the blueprint of the software world, designed to make software architecture more visual and help other software developers understand how everything is implemented. There are two types of diagrams: Structural UML diagrams and Behavioral UML diagrams. They are useful for every type of occasion, such as databases. In the example below, we have a simple diagram where each main category is a class. Each class contains attributes and methods that are assigned a visibility such as public, private, or protected, and the diagram also displays the data types associated with the methods. Relationships are the connections objects make with other classes; there are different types such as association, inheritance, aggregation, and implementation. Animal is listed as the superclass and the specific animals are the subclasses; the arrows show the inheritance relationship from the superclass.
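A rough code sketch of the kind of class diagram described, with Animal as the superclass; the class names, attribute, and method here are illustrative, not taken from the actual diagram:

```python
class Animal:
    """The superclass box in the diagram."""

    def __init__(self, name: str):   # 'name' plays the role of an attribute
        self.name = name

    def speak(self) -> str:          # a method listed in the class box
        return "..."


class Dog(Animal):                   # inheritance arrow: Dog points to Animal
    def speak(self) -> str:
        return "Woof"


class Cat(Animal):                   # inheritance arrow: Cat points to Animal
    def speak(self) -> str:
        return "Meow"


print(Dog("Rex").speak())  # Woof
print(Cat("Mia").speak())  # Meow
```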

(Diagram from “UML Class Diagrams Tutorial, Step by Step” by Salma on Medium)

Regarding what I have learned about UML diagram design, I had a small amount of prior experience thanks to taking the database course. But as we progress further in this course, I have a clearer perspective on how to go about it and how to make one when it is needed, which makes programming much simpler. UML could have been useful during my intro to programming class, because drawing the classes, methods, objects, data types, and so on on a piece of paper would have made the program and its connections easier to see. This will help all developers communicate effectively by examining the appropriate connections, even though there may be some conflicts during the code-writing phase that require readjusting the diagram accordingly. I have provided links below to give more insight into how to go about using and making UML diagrams and why they are a crucial part of the software development process.

Links:

http://blog.genmymodel.com/what-you-need-to-know-about-uml-diagrams-structure-diagrams-1.html

https://www.genmymodel.com

https://www.geeksforgeeks.org/unified-modeling-language-uml-introduction/

From the blog cs@worcester – Dahwal Dev by Dahwal Charles and used with permission of the author. All other rights reserved by the author.

REST APIs

Summary:

This article helps us understand how to read APIs and use them effectively by teaching us what we need to know about REST APIs. It goes over the anatomy of an API request, testing endpoints with cURL, JSON, authentication, and API versions. After reading this article we should be able to use cURL to perform requests with GET, POST, PUT, PATCH, and DELETE. As well as that, we should get a grasp of how to authenticate our requests with the -u option and what HTTP statuses mean.

Reason:

The reason behind choosing this article is that in class we just recently started learning about APIs, and I think it is one of the most important real-world skills we need in order to be software engineers.

What I learned:

An API is an application programming interface. It is a set of rules that allow programs to talk to each other. The developer creates the API on the server and allows the client to talk to it. REST stands for “Representational State Transfer”. It is a set of rules that developers follow when they create their API. One of these rules states that you should be able to get a piece of data when you link to a specific URL. In the anatomy of a request, you have the endpoint, the method, the headers, and the data (or body). The endpoint (or route) is the URL you request. The method is the type of request you send to the server, and you can choose from GET, POST, PUT, PATCH, and DELETE. These methods provide meaning for the request you’re making. They are used to perform four possible actions: Create, Read, Update, and Delete. Headers are used to provide information to both the client and server. They can be used for many purposes, such as authentication and providing information about the body content. The data contains information you want to be sent to the server. This option is only used with POST, PUT, PATCH, or DELETE requests. JSON, also known as JavaScript Object Notation, is a common format for sending and requesting data through a REST API. On the web there are two main ways to authenticate yourself: with a username and password, or with a secret token. Developers update their APIs from time to time. Sometimes, the API can change so much that the developer decides to upgrade it to another version. If this happens and your application breaks, it’s usually because you’ve written code for an older API, but your request points to the newer API. You can request an API version in two ways: directly in the endpoint, or in a request header.
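As a small illustration of the username-and-password method, what curl's -u option effectively sends is a Basic auth header: the credentials joined by a colon and base64-encoded. The credentials below are made up:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header value that curl -u produces."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("zell", "password"))
# Basic emVsbDpwYXNzd29yZA==
```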

From the blog CS@Worcester – Life as a CS Student by Dylan Nguyen and used with permission of the author. All other rights reserved by the author.

week-7

Hello. I wanted to write this blog after looking over some class activities again to see if there were any questions to review, but something caught my attention. I read the word “microservices” in some class-work exercises; I got interested and looked it up again. I found two links that helped me understand what microservices are, along with examples from Amazon.

 

What are microservices? 

Microservices (microservice architecture) is an architectural method that structures an application as a collection of services that are

  • Highly maintainable and testable
  • Loosely coupled
  • Individually deployable
  • Organized around business capabilities
  • Owned by a small team

The microservice architecture makes applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features, and reducing the complexity of large applications. It even allows an organization to evolve its technology stack.

The pattern language guide 

The microservice architecture isn’t perfect; it has several problems. Moreover, when using this architecture, many issues must be addressed.

The microservice architecture pattern language is a set of patterns for applying the microservice architecture. It has two goals:

  • The pattern language helps you decide whether microservices are a good fit for your application.
  • The pattern language helps you apply the microservice architecture successfully.

Characteristics of Microservices

  • Autonomous – Each service in a microservices architecture can be developed, deployed, operated, and scaled without affecting the functioning of other services. Services don’t need to share any code or implementation with other services. Any communication between individual components happens through APIs.
  • Specialized – Each service is designed for a set of capabilities and focuses on solving a specific problem.

Benefits of Microservices

  • Agility – Microservices promote an organization of small, independent teams that take ownership of their services. Teams work in a small and well-understood context and are able to work independently and quickly. This helps shorten development cycle times and benefits the overall throughput of the organization.
  • Flexible Scaling – Each service can be independently scaled to meet the demand for the application feature it supports. This enables teams to meet requirements, precisely measure the cost of a feature, and maintain availability if a service experiences a spike in demand.
  • Easy Deployment – Microservices enable continuous integration and delivery, making it easy to try out new ideas and roll back if something doesn’t work. The low cost of failure enables experimentation, makes it easier to update code, and accelerates time-to-market for new features.
  • Technological Freedom – Microservices don’t follow a “one size fits all” approach. Teams can choose the best tool to solve their specific problems.
  • Reusable Code – Dividing software into small modules enables teams to use functions for multiple purposes.
  • Resilience – Service independence increases an application’s resistance to failure. With microservices, applications handle total service failure by degrading functionality rather than crashing the entire application.

From the blog Andrew Lam’s little blog by Andrew Lam and used with permission of the author. All other rights reserved by the author.