Category Archives: CS-343

Semantic Versioning

Semantic versioning is a versioning scheme in three parts: major.minor.patch. The patch number is incremented with the addition of a bug fix, the minor number is incremented with the addition of a new feature, and the major number is incremented when the changes being made are incompatible with previous versions. Though it is among the most popular versioning schemes, Colin Eberhardt explains in a blog post, “Semantic Versioning is not Enough”, that it is not without issue.

Eberhardt explains that issues may surface when updating the dependencies of a project. There are two ways to update a dependency within a project: passively or actively. Passively updating allows a computer to update the dependency without human intervention, while actively updating requires a human to manually update the dependency.

When adding a library to your project, you may specify a range of acceptable versions of that library. This hands the decision of whether to update a dependency to a new version over to the tooling, provided that the new version is still within the declared range. Eberhardt warns that declaring dependencies like this could create different development environments on different machines, where one machine decides that an update is appropriate and another does not. Applications may function differently after even small bug fixes in their dependencies. In addition, passively updating to a version of a library with new features is nearly pointless, since you would need to change your code to be able to use those features.
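
As a rough illustration, a version range in a hypothetical package.json for an npm project might look like this (the package name and versions are made up):

    {
      "dependencies": {
        "example-logging-library": "^2.3.1"
      }
    }

The caret tells the package manager it may passively pick up any 2.x release at or above 2.3.1 but never a 3.0.0 major release, while a tilde (~2.3.1) would limit passive updates to patch releases only.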

It seems more useful to actively accept updates to a dependency. This is fine for patch updates since these do not require any alterations to your code. However, updating to either a minor or major update would require you to change your code, whether it be to utilize a new feature or to prevent your project from breaking. Eberhardt argues that, because of this, the distinction between a major and a minor update is not useful.

Eberhardt also points out that a major update can often fail to convey the severity of what was changed. A trivial breaking change would increment the same as a rewrite of an entire library. Eberhardt suggests combining romantic and semantic versioning by giving names to more significant changes.

I selected this article because I thought it would be useful to see another perspective. Eberhardt seems to have a lot of experience, and he draws from that experience to write this critique. I still think that semantic versioning is a useful scheme, but it is interesting to see a critique of something so popular. I think the suggestion of naming significant updates in addition to incrementing the major number is a useful one that I will try adopting in future projects.

From the blog CS@Worcester – Ciampa's Computer Science Blog by graceciampa and used with permission of the author. All other rights reserved by the author.

Git Gud Skrub

So I’m going to have to be honest with my readers, and also I’m going out on a limb here: I am not the most experienced with Git. This is especially detrimental when I am taking a course that requires regular usage of Git, and that course is a limited time offer. What am I going to do? In my general theme of learning, I am going to practice over and over again until I finally “git gud” (as the kids would say) at Git.

Git is version control software used to track the history of a project and to create different “branches” of it. These branches can be local or remote; they can “pull” information from other branches, and “push” or “merge” this information back as well. Other important commands include rebasing, references such as HEAD, and “status-checking”. These commands, and many more, are used to ensure that software can be created or polished in separate branches while the “main” branch is still operational. When the changes are tested and ready to be added to the program, they can be “merged” back in.
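
As a minimal sketch of that workflow (the branch name and remote are only examples), the commands below create a feature branch, commit work on it, and merge it back into main:

    # create and switch to a new branch off the current one
    git checkout -b new-feature

    # stage and commit the changes made on this branch
    git add .
    git commit -m "Add new feature"

    # check which branch you are on and what has changed
    git status

    # switch back to main, bring it up to date, and merge the feature in
    git checkout main
    git pull origin main
    git merge new-feature

    # publish the result to the remote repository
    git push origin main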

As a way of “branching off” from my traditional form of blogs, I have decided to link a unique source. Instead of an article or a YouTube video, I have linked an interactive tutorial called “Git-It”. This is because I believe that the best way to learn Git is by practicing it over and over; similar to a “traditional” programming language, hands-on experience with Git will take a programmer further than any abstract knowledge will. In a nutshell, this tutorial will teach you all the basics needed to understand Git. These basics include, but are not limited to:

  1. Creating, modifying and deleting repositories
  2. Pulling and Pushing data from other repositories (both local and remote)
  3. How to clone material from a remote repository to a local computer
  4. Creating and merging branches

Not only is Git going to be essential for completing homework assignments, but it is practically unavoidable in the software workforce. Similar to Singleton/Strategy refactoring, Git allows different branches to hold different implementations, while all still referring to one “global, main” branch. Branches can also be used for various SemVer levels in a project; should changes need to be made, a project can be moved forward or rolled back through commits and reverts, respectively.

Most importantly, I feel as though Git is a great tool to help me understand the concept of continuous integration. As mentioned before, Git allows for multiple branches of software; these branches can then be “forked” or “cloned” to private servers or local computers. This gives the programmer a copy of the project to work on, while the actual project is still running for its customers. This tooling, combined with Docker containers, emphasizes the class theme of being able to work with software on different platforms, thus maximizing versatility.

Link: http://jlord.us/git-it/challenges/get_git.html

From the blog CS@Worcester – mpekim.code by Mike Morley (mpekim) and used with permission of the author. All other rights reserved by the author.

InfoSeCon 2021

Dr. Cunningham focuses on integrating security into operations; leveraging advanced security solutions; empowering operations through artificial intelligence and machine learning; and planning for future growth within secure systems.

https://www.youtube.com/watch?v=VBfTpmEyHy0

This video is the Keynote speaker’s presentation for the Raleigh chapter of the Information Systems Security Association for their 2021 fundraising event called InfoSeCon. I used to be a chapter member before I moved here to Massachusetts and I try to keep up with some of their announcements and events from time to time.

Dr. Cunningham was a previous keynote speaker, and seeing him return this week on their YouTube channel was exciting. He’s known for being the creator of the Zero Trust eXtended (ZTX) framework, a network security framework that prioritizes network isolation and continuous monitoring and validation.

This keynote presentation primarily focuses on network security and serves as a reminder of best practices for operating a corporate network while utilizing a zero-trust framework. He uses horror movies as a theme to tie everything together and make it engaging for his audience, and I really appreciate the consideration. InfoSeCon took place back in October, so the theme was topical and showed that this wasn’t some canned presentation he had been giving all year. It really goes to show that one of the largest hurdles to network security is just communicating the importance of simple practices and getting your audience to follow them, whether they are clients who hire you to improve their company’s practices and systems or users already in your network.

That’s really one of the main appeals of a zero-trust framework. Traditionally, network security is treated as an us-vs-them scenario where a breach is always something being inflicted upon the company rather than the natural consequence of risky behaviors. ZTX operates differently; it assumes that you can’t trust end users to know everything you know about what they should or should not do. It encourages security professionals to segment everything and isolate as much as possible, so that when one person unknowingly invites the vampire into the house, we can shut another door in its face and not have to worry about evacuating everybody.

Specifically, he stresses the importance of maintaining your systems so that older vulnerabilities aren’t taken advantage of and don’t make life harder in the future, of listening to your users, and of moving to the cloud, which he argues is likely the best solution for people dealing with a “haunted” infrastructure, along with a lot of other really great advice.

While a lot of the best practices are simple and seem like common sense, I always appreciate a reminder so that I don’t make bone-headed mistakes that can cost either me or my employer great sums of money. It’s important as I learn more and more about Software design and architecture that I keep in mind how I could be unknowingly creating vulnerabilities that could be exploited if ever someone decided to try hard enough to find them.

From the blog CS@Worcester – Jeremy Studley's CS Blog by jstudley95 and used with permission of the author. All other rights reserved by the author.

Microservices Architecture: Uses and Limitations

Using information found by Narcisa Zysman and Claudia Söhlemann, we take a closer look at microservices architecture and learn about why it is being widely used and what its limitations are.

Uses:

Better fault isolation: If one microservice fails, others will likely continue to work.

Optimized scaling decisions: Scaling decisions can be made at a more granular level, allowing more efficient system optimization and organization.

Localized complexity: Owners of a service need to understand the complexity of only what is within their service, not the whole system.

Increased business agility: Failure of a microservice affects only that service, not the whole application, so enterprises can afford to experiment with new processes, algorithms, and business logic.

Increased developer productivity: It’s easier to understand a small, isolated piece of functionality than an entire monolithic application.

Better alignment of developers with business users: Because microservice architectures are organized around business capabilities, developers can more easily understand the user perspective and create microservices that are better aligned with the business.

Future-proofed applications: Microservice architectures make it easier to replace or upgrade individual services without impacting the whole application.

Smaller and more agile development teams: Teams involve fewer people, and they’re more focused on the specific microservices they work on.

Limitations:

Can be complex: While individual microservices may be easier to understand and manage, the application may have significantly more components involved, which have more interconnections. These interdependencies increase the application’s overall complexity.

Requires careful planning: Because all the microservices in an application must work together, developers and software architects must carefully plan out how to break down all the functionality and dependencies. There can be data challenges when starting an application from scratch or modifying a legacy monolithic application. Also, multiple iterations can be required until it works.

Proper sizing is critical and difficult: If microservices are too big, they might have all the drawbacks of monoliths. If they are too small, the complexity of the individual services is moved into the dependency maps, which makes the application harder to understand and manage at scale.

Third-party microservices: Third-party services can change their APIs (or dependencies) at any time and in ways that may break your application.

Downstream dependencies: The application must be able to survive failures of individual microservices, yet downstream problems often happen. Building fault tolerance into an application built with microservices can be more complex than in a monolithic system (see the sketch after this list for one simple approach).

Security Risks: As microservices grow in popularity, an application’s vulnerability to hackers and cybercriminals may increase. Because microservice architectures allow the use of multiple operating systems and languages when building an application, there is the possibility of having more targets for malicious intrusions. We are also often unaware of the vulnerabilities of the third-party services being used.
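
To make the downstream-dependencies point concrete, here is a small TypeScript sketch (the service URL and fallback value are hypothetical, not taken from the sources) of one common way to survive a failing downstream microservice: give the call a timeout and fall back to a safe default instead of letting the whole request fail.

    // Hypothetical helper: fetch recommendations from a downstream microservice,
    // but never let that call take down the caller.
    async function getRecommendations(userId: string): Promise<string[]> {
      const controller = new AbortController();
      // Abort the request if the downstream service takes longer than 2 seconds.
      const timer = setTimeout(() => controller.abort(), 2000);

      try {
        const response = await fetch(
          `http://recommendation-service/users/${userId}/recommendations`, // made-up URL
          { signal: controller.signal }
        );
        if (!response.ok) {
          // Downstream error: degrade gracefully instead of failing the caller.
          return [];
        }
        return (await response.json()) as string[];
      } catch (err) {
        // Timeout or network failure: fall back to an empty list.
        return [];
      } finally {
        clearTimeout(timer);
      }
    }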

When complexity increases, we should make sure it is warranted and well understood, and regularly examine the interconnected set of microservices so the application does not crash. Learning about these limitations helps us come up with solutions that we can apply in our future work and when we work on the LibreFoodPantry system.

Source:

https://www.castsoftware.com/blog/microservices-architecture-a-good-or-bad-approach

https://kruschecompany.com/microservice-architecture-for-future-ready-products/#Microservices_cons

From the blog CS@worcester – Towards Tech by murtazan and used with permission of the author. All other rights reserved by the author.

Week 9: Intro to Docker

https://www.bmc.com/blogs/docker-101-introduction/#

In the blog post above, Sudip Sengupta introduces Docker and gives us a guide to it. The post goes over application development today, what Docker is, its components, its benefits, and alternatives to Docker. In today’s application development, a common struggle is managing an application’s dependencies and technology stack across various cloud and development environments. Adopting Docker or another containerized framework allows for a stable framework without adding complexities, security vulnerabilities, and operational loose ends. Docker also makes application development easier by allowing development teams to save time, effort, and money by dockerizing their applications into single or multiple modules. Testing with Docker is also done independently, since each application is in its own container, which means testing does not impact any other components of the application. Docker also helps keep consistent versions of libraries and packages to use during the development process.
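
As a minimal sketch of what dockerizing one module can look like (the base image, port, and file names are placeholders, not taken from the blog post), a Dockerfile for a small Node.js service might be:

    # Start from a small official Node.js base image (the version is an example)
    FROM node:16-alpine

    # Copy the dependency manifests and install dependencies inside the image
    WORKDIR /app
    COPY package.json yarn.lock ./
    RUN yarn install

    # Copy the application source and document the port it listens on
    COPY . .
    EXPOSE 3000

    # Command run when a container is started from this image
    CMD ["node", "src/index.js"]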

I chose this blog post because I was having a hard time in class understanding Docker. I tried to view it as a virtual machine, which was somewhat right, but a container is actually more efficient than a virtual machine, since you don’t have to run a whole virtual machine and use far more resources than needed. Virtual machines require significant RAM and GPU resources because they run a separate OS and a virtual copy of all the hardware that OS requires. A Docker container, by contrast, packages just the dependencies, libraries, and config files the application needs, so Docker would be the much more viable option in the long run in my opinion; containers in general are more flexible and portable than a VM.

You can see why Docker is being adopted more: it’s much more efficient and easier to use while developing an application. With this blog post, I’ve learned a lot more about Docker and why it’s used. I thought this was a good blog post because it helped me get a better understanding of why Docker is used in application development and helped me familiarize myself with technology I’ll most likely be using when I get a professional job and start developing applications with a team. In the future, I expect to have to learn more about Docker, or application containerization in general, for my job as a software developer, and having this knowledge of Docker and why developers use it beforehand is very useful.

From the blog CS@Worcester – Brendan Lai by Brendan Lai and used with permission of the author. All other rights reserved by the author.

Unified Modeling Language (UML)

For this week’s blog post I have found a blog on Unified Modeling Language (UML). UML is an object-oriented modeling language that has become a de facto standard for documenting a software system. It is a pictorial description of classes, objects, and relationships that represents a plan for the working hardware or software system. For example, we can use a UML class diagram to show what is going on in a three-class software system. For this example, the names of the three classes will be Student, Classroom, and Teacher. In UML, each class is drawn as a box with three compartments, separated by lines, for the class name, the instance variables, and the methods. Each variable is listed as its name and type separated by a colon (for example, id : String), while each method is listed as the method name with its parameters (if any) and its return type (if any) separated by a colon (for example, getId() : String). Now, if we wanted to show that the Classroom class uses objects of the Student class, we would draw an association arrow pointing from Classroom to Student. If we wanted to show that one class extends another, we would instead draw a solid line with a hollow triangle pointing at the parent class; a dashed arrow is reserved for dependencies and for realizing interfaces.
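
One way to write such a diagram down in text form is PlantUML; the sketch below adds a hypothetical Person class purely to show the generalization arrow alongside the association:

    @startuml
    class Person {
      - name : String
      + getName() : String
    }

    class Student {
      - id : String
      + getId() : String
    }

    class Teacher
    class Classroom

    ' Student and Teacher extend Person: solid line, hollow triangle at the parent
    Person <|-- Student
    Person <|-- Teacher

    ' Classroom uses Student objects: association arrow from Classroom to Student
    Classroom --> Student
    @enduml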

UML was designed and created back in the ’90s. This was a period when object-oriented languages (OOLs) such as C++ were being used to build complex but compelling systems. The issue during this time was that we had complex systems but no good way of showing on paper what a system was doing. This changed in 1994, when software engineers Grady Booch, Ivar Jacobson, and James Rumbaugh of Rational Software began creating UML; the development of the language was finished two years later in 1996. The designers came together to find a language that would reduce this complexity. According to Study Section, the website hosting the blog, “Booch’s method was flexible to work with throughout the design and creation of objects. Jacobson’s method contributed a great way to work on use-cases. It further has a great approach for high-level design. Rumbaugh’s method turned out to be useful while handling sensitive systems. Behavioral models and state-charts were added in the UML by David Harel.” (Study Section) In 1997, the Object Management Group (OMG) adopted UML as a standard; the OMG is now responsible for maintaining UML and updating it as new technologies come out.

From the blog CS@worcester – Michale Friedrich by mikefriedrich1 and used with permission of the author. All other rights reserved by the author.

Rest API Design

https://swagger.io/resources/articles/best-practices-in-api-design/

The article I’ve chosen is based on REST API design, a topic that we are currently working on in class. Many important parts of REST API design are talked about throughout the article, some of them being REST API design characteristics, collections and their resources, HTTP methods, and more. According to the article, a good API should be easy to read, complete and concise, and hard for a user to misuse, which is all important not only for integration of the API but also when using the API with any kind of application it was designed for. Collections, resources, and URLs are also very important to designing an effective REST API, as collections and resources allow the passing of data throughout the API and its functions. Using the proper URL helps you control which data is shown at a specific time, whether it’s the data of a specific user or an error code.

Describing the purpose or function of resources is also very important, and it is done through the use of different HTTP methods. The methods listed in the article include GET, POST, PUT, PATCH, and DELETE. Each of these HTTP methods is used to retrieve, create, update, or delete resources. Another way to develop a good REST API, according to the article, is by providing feedback for users through error codes when the API is used incorrectly, whether there is an error in an entry by the user or the user uses the incorrect operation when entering data. The responses or error codes you provide can help users and yourself understand your API better, which will also help you when testing the functions of your API, so you know how any commands or entries should be formed during use. The author also highlights that using examples in your GET methods is important, as it shows exactly what the user can expect when they “successfully call the API function”.
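
As a small, made-up example of those ideas together (the resource names and data are hypothetical), a request for a user resource and an error response for a missing user could look like this:

    GET /users/42 HTTP/1.1
    Host: api.example.com
    Accept: application/json

    HTTP/1.1 200 OK
    Content-Type: application/json

    { "id": 42, "name": "Ada" }

    GET /users/9999 HTTP/1.1
    Host: api.example.com
    Accept: application/json

    HTTP/1.1 404 Not Found
    Content-Type: application/json

    { "error": "User 9999 not found" }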

I chose this article as I found it helpful with our current class topic; it outlines all the basics of REST API design and helps student developers like myself and my classmates better understand how to effectively design a REST API. This article helped me understand the use of HTTP methods further, as well as the importance of examples and resources, which helps greatly with our current class subject. The article presents the design of REST APIs in an easily digestible way for students, or anyone else, interested in developing such an API.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

Why Use Docker?

Docker is an open source technology which enables the easy use of containers for software development. Steven J. Vaughan-Nichols, in his article “What is Docker and why is it so darn popular?”, writes that “… over 3.5 million applications have been placed in containers using Docker technology and over 37 billion containerized applications have been downloaded.” Needless to say, Docker is quickly becoming industry standard with “… almost 40 percent market-share growth in 12 months.” For anyone looking to get into software development, it is an absolute requirement to learn how to use Docker or something very similar.

Docker containers attempt to replicate a virtual machine setup without actually running an entire virtual machine. The main difference is that Docker containers all run on a single operating system, sharing the host’s kernel, instead of each emulating its own operating system. This allows for much more efficient usage of resources, and it is estimated that between four and six times as many applications can be run on a Docker system as opposed to the traditional virtual machine setup.

One major reason why Docker is so versatile is that it allows your project to be packaged with all of its requirements in the container; because of this you would not need to pre-install dependencies on each machine you wish to run your application on. This also allows you to run your applications on cloud services very easily; in fact, Docker is designed to integrate with DevOps tools such as Puppet, Chef, Vagrant, and Ansible.
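
For example (the image name and port are made up), the same two commands work on any machine that has Docker installed, with no setup beyond the Dockerfile shipped with the project:

    # build an image from the Dockerfile in the current directory
    docker build -t my-app:1.0 .

    # run the application in a container, mapping port 8080 on the host to the container
    docker run -d -p 8080:8080 my-app:1.0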

Since Docker is easily integrated on cloud servers, it is also easy to emulate a live server build. When developers want to test a change for a live server, they can simply run a container which is set up in the same way as the live container. These containers can be tested very quickly and safely, and it is reported that developers who used this method had a “three times lower rate of change failure”.

Personally, I believe Docker is the future of software development. There are many benefits to using Docker with very few drawbacks, if any. Although Docker may take some time to learn and get used to, it most certainly will make developers’ jobs much easier in the long run. I anticipate that Docker will quickly become the industry standard due to the many benefits listed above. Docker is a great piece of software that will allow developers to run more applications, on a multitude of different machines, with quick turnaround on updates and fixes.

From the blog CS@Worcester – Ryan Blog by rtrembley and used with permission of the author. All other rights reserved by the author.

REST APIs

REST is an acronym for Representational State Transfer and is an architectural style for distributed hypermedia systems. There are guiding principles and constraints that need to be met for an API to be referred to as “RESTful”. In total, there are six principles/constraints in REST: Uniform Interface, Client-Server, Stateless, Cacheable, Layered System, and, optionally, Code on Demand.

Having a uniform interface allows one to simplify the overall system architecture and helps to improve the visibility of interactions. Within this principle there are four further constraints that must also be met to be RESTful. The first is identification of resources, where each resource must be uniquely identified. The second is manipulation of resources through representations, so that resources have a uniform representation in the server response. The third is self-descriptive messages, where each resource representation carries enough information to describe how to process the message. The last is hypermedia as the engine of application state, where the client should only need the initial URI of the application, and the application should drive all other resources and interactions through the use of hyperlinks.
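
A hypothetical order representation with hypermedia links (the field names are only illustrative) shows what that last constraint means in practice: the client follows the links the server provides instead of hard-coding every URI.

    {
      "orderId": 123,
      "status": "processing",
      "_links": {
        "self":   { "href": "/orders/123" },
        "cancel": { "href": "/orders/123/cancel" },
        "items":  { "href": "/orders/123/items" }
      }
    }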

Client server is much simpler, as it only refers to the separation of concerns, so that the client and server components can evolve independently of each other. This allows one to separate the user interface from the data storage, and allows independent improvement of each without interrupting the other.

Statelessness refers to each request from the client to the server containing all of the necessary information required to completely understand and execute the request. In addition, the server cannot take advantage of any previously stored context information on the server.

Cacheable refers to a response needing to implicitly or explicitly label itself as cacheable or non-cacheable. If a response is marked as cacheable, then the client application can reuse response data later for any equivalent requests to help improve overall performance.
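
For instance, a response can label itself cacheable with a standard header like this (the one-hour lifetime is just an example):

    HTTP/1.1 200 OK
    Content-Type: application/json
    Cache-Control: max-age=3600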

A layered system allows the architecture to be composed of many hierarchical layers by constraining the behavior of the components. For example, each component in a layered system cannot see beyond the layer that they are interacting with.

Lastly, REST optionally allows client functionality to be extended by downloading and executing code in the form of scripts. Downloading this code reduces the number of features that have to be pre-implemented in the client.

Source

I chose the above article because I wanted more information on what it meant for an Application Programming Interface to be “RESTful”. The above article went above and beyond what I could hope to find and provided a lot of information on exactly what it meant for an API to be RESTful. Because of it, I now know exactly what it means for an API to be RESTful.

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.

Full Stack Web Apps

Recently, in our “Software Constr, Des & Arch” class, our professor deployed two projects for us to play around with: one called “API” and one called “Back-end”. This is the first time during our Computer Science program that we have seen a project involving multiple computing languages. To get it to run, you need all of the environments to be installed. The project comes with Docker files. Using Docker on our machines, these Docker files tell Docker which environments are needed to run the project, and it will install them for us. Without Docker, each one of these environments and extensions would need to be installed manually; Docker automates this process for us.

Although we are still in the early stages of playing around with the “API” and the “Back-end”, we should be able to understand what is going on in our machines as we install and deploy these environments. We have Dockerfiles that are used as the base images for the backend and the test runner. We have third-party JavaScript libraries for the backend, and tools like yarn to handle these dependencies. All of these are needed for the entire project to run; it does not work if any one of these environments is missing.
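
I am not reproducing our course project here, but a generic docker-compose file along these lines shows how several environments can be declared once and started together (the service names, images, and ports are placeholders):

    version: "3.8"
    services:
      backend:
        build: ./backend          # built from the backend's Dockerfile
        ports:
          - "3000:3000"
      database:
        image: mongo:4.4          # example database image
        volumes:
          - db-data:/data/db
    volumes:
      db-data: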

Although I have been calling this all a project, a truer description would be a full-stack web app. The API and the Back-end work together systematically to bring this web application together. Essentially, what we are trying to do is build a web application that automates the process of running a food pantry online. This web application will allow users to create accounts and sign in. It will allow the users to place orders or donate food. All the actions the outside users take will affect our database. Our web application will have scripts that automate this process, allowing us to safely share and manipulate our database of food with the outside world: the users request changes, and we can safely make those changes.

This tutorial gives insight into how to build a full-stack web app using Spring Boot.

https://milanwittpohl.com/projects/tutorials/Full-Stack-Web-App/the-backend-with-java-and-spring

Spring Boot uses Gradle to build the project, and Gradle also manages the dependencies needed to run it. Although our app is a little different, the idea is still the same, and in this link you will find a little more description of some of the components of our project, because it shares similar components with the ones used in this tutorial.
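
As a rough idea of what that looks like (the versions are placeholders and this is not the exact build file from the tutorial), a Spring Boot build.gradle declares the plugins and the dependencies that Gradle should manage:

    plugins {
        id 'org.springframework.boot' version '2.5.6'
        id 'io.spring.dependency-management' version '1.0.11.RELEASE'
        id 'java'
    }

    repositories {
        mavenCentral()
    }

    dependencies {
        // the web starter pulls in Spring MVC and an embedded server
        implementation 'org.springframework.boot:spring-boot-starter-web'
        // the test starter is used for unit and integration tests
        testImplementation 'org.springframework.boot:spring-boot-starter-test'
    }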

From the blog CS@Worcester – Andrew Sychtysz Software Developer by Andrew Sychtysz and used with permission of the author. All other rights reserved by the author.