Category Archives: Week 7

UML Diagramming

Unified Modeling Language, otherwise known as UML, is one of many different modeling languages used in programming. The purpose of modeling languages such as UML is to show the structure of code in order to better understand it. Widespread use of UML started in 1997 when it was adopted as a “standard”. UML is used with object-oriented programming and is separated into two categories: structural diagrams and behavioral diagrams. Structural diagrams show the static aspects of code, while behavioral diagrams show the dynamic parts of code.

Some concepts that are a big part of UML are classes, objects, inheritance, abstraction, encapsulation, and polymorphism; all of these are directly related to object-oriented programming. Among the structural UML diagrams: class diagrams show static structure; composite structure diagrams show internal code structure and how the part being analyzed interacts with the rest of the program; object diagrams show what instances are in code and their relations/associations; and component diagrams show the organization of a system's physical components. Among the behavioral UML diagrams: state machine diagrams represent the “state” of the system; activity diagrams show a system's flow of control; and sequence diagrams show the interactions between objects.

I chose this article because we have worked on UML diagrams in class, and this article is well structured and tells readers exactly what UML is without overloading them with information. Readers are presented with what UML is and the different ways it is used to show the many things involved in object-oriented programming. The site the article is on also has multiple other articles covering more complex aspects of UML, so readers who want to dive deeper into the subject are free and able to do so without having to look for another website.

I liked this article because of its simple format, and I feel that it helped me when working on assignments regarding UML. Even though we mainly went over class diagrams, knowing the other types of diagrams and what they are used for was interesting and even helpful to me in understanding the significance of class diagrams. Each diagram in UML has a “specialty,” you could say, as each diagram type maps out something different; even though class diagrams are rather simple, they can help you understand the benefits that UML provides for software-based fields.

https://www.geeksforgeeks.org/unified-modeling-language-uml-introduction/?ref=lbp

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

Docker, and why it is so important

Docker is a relatively new way of delivering software. With Docker, all that is needed is an image of the dependencies and software required, and Docker does the rest. This is huge: before, you would need to download and install specific versions of software on each and every computer you wanted to work on, but with Docker, you say you want X, Y, and Z, and Docker does the rest for you, making life much easier.
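
To make that concrete, here is a minimal sketch of what such a declaration can look like as a Docker Compose file. The service names and image versions are placeholders of my own, not something prescribed by Docker:

services:
  web:
    build: .            # build the application image from the local Dockerfile
    ports:
      - "8000:8000"     # expose the app on the host
    depends_on:         # "I want X, Y, and Z"
      - db
      - cache
  db:
    image: postgres:14  # pinned versions, no manual installs on each machine
  cache:
    image: redis:7

Running docker compose up on any machine with Docker installed brings up the same stack, which is exactly the "Docker does the rest" part.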

The way Docker does this is through what are called “containers”. Containers allow developers to package an application with everything that it needs, like dependencies and libraries, and then ship it out as one package. This is where the magic happens: it means the application will run on any other machine regardless of any customized settings that machine may have that differ from the machine being used for writing and testing the application. Docker is, in a way, like a virtual machine in this sense, but instead of running an entire virtual operating system, Docker allows applications to share the kernel of the system they run on and only requires them to be shipped with things not already on the host computer, instead of having to send out everything. This translates to a big performance increase in the speed of the application and reduces the overall size of the application as well.

Docker is also open source, which is, in my opinion, one of the most important things software can be these days. This means that anyone can contribute to Docker and modify it to their own needs if they need something Docker does not provide by default. It also means that since anyone can modify it, anyone can see the inner workings and ensure that nothing is going on that's not supposed to, like recording private information to sell to advertisers, for example. Docker makes life easy for developers, allowing them to not have to worry about the system their program will be running on. Docker also gives flexibility through its containers and can reduce the number of systems needed for the program to run.

Source

I chose the above article because it gave a good look into what Docker is and how it is useful for software development in today's world. It explained the key aspects of Docker and what they are useful for compared to past methods. In addition, the article gave a good analogy for Docker by comparing it to transporting cargo prior to the standardization of shipping containers, which made transporting goods easier; that roughly translates to what Docker does with its containers.

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.

API Creation, Why Simple is Better

Whenever you interact with a program, it is usually done through an API. Every website you visit, most programs you use, and even your own operating system use APIs to communicate with applications. APIs are very prominent in modern computing and are essential to establishing the connection between programs and users alike.

Seeing as every programmer will interact with an API at some point or another, it is important to understand their place and how to create a good API should you need to do so. It is important that an API be easy to work with, complete, and concise. Meeting these characteristics can result in many gains, not just in the finished product but in other aspects of coding projects. As Keshav Vasudevan puts it, good API design can lead to “improved developer experience, faster documentation, and higher adoption for your API.”

As the author puts it, an API must be extremely simple to use while achieving its goals. To that end, Vasudevan has outlined key aspects of API design that one should take into account when creating an API. Much of it comes down to proper organization and the use of consistency to avoid confusion while utilizing the API. Small changes, such as keeping URLs short and simple and pairing resource functionality with HTTP methods, can make the process much easier. Methods such as GET, POST, PUT, PATCH, and DELETE are commonly used in these APIs, along with many other methods common to HTTP.
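
As a rough sketch of these conventions, a hypothetical /users resource (the URLs and summaries below are illustrative, not taken from the article) pairs one short noun-based URL with several HTTP methods:

paths:
  /users:
    get:                # retrieve the collection of users
      summary: Returns a list of users
    post:               # create a new user
      summary: Adds a new user
  /users/{id}:
    get:                # retrieve a single user by ID
      summary: Returns the user with the given ID
    delete:             # remove a single user by ID
      summary: Deletes the user with the given ID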

Along with proper organization while building the API, it is just as important to ensure that it is not indecipherable to others. A problem that often occurs between developers and product owners is that the developers might misread the meaning of a feature and implement incorrect functionality. Developers should check that all needed GET commands are implemented and have an example where one can expect a specific output on a successful run. Giving and receiving feedback is important for developers because of the disconnect that can occur between the target audience of a program and the developers creating it.

An example of a GET response is as follows:

responses:
  200:
    description: Successfully returned information about users
    schema:
      type: array
      items:
        type: object
        properties:
          username:
            type: "string"
            example: "kesh92"
          created_time:
            type: "dateTime"
            example: "2010-01-12T05:23:19+0000"

Once all this is done, it is important to remember that complexity should be handled “elegantly,” as Vasudevan puts it. There are likely large amounts of different data and information that must be accessed. Much of this data is related, and it makes no sense from an efficiency standpoint to create a resource for each piece of it; therefore, it is important to keep the abilities of resources broad. You should keep the retrieval process simple, down to a query and parameters, when searching or using a resource. Overall, the API must be as user-friendly as possible to ensure that it works well for future users who don't know the API like you do.
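
For example, rather than defining a separate resource for every possible lookup, a single broad GET endpoint can accept query parameters. A hypothetical sketch in the same style as the snippet above (the parameter names are mine, for illustration):

paths:
  /users:
    get:
      parameters:
        - in: query       # e.g. GET /users?location=USA&name=kesh92
          name: location
          type: string
        - in: query
          name: name
          type: string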

Bibliography

Vasudevan, Keshav. “Best Practices in API Design.” SmartBear.com, https://swagger.io/blog/api-design/api-design-best-practices/.

From the blog CS@Worcester – George Chyoghly CS-343 by gchyoghly and used with permission of the author. All other rights reserved by the author.

K8s Crash Course

Blog Discovery

As a newcomer to the software industry, Kubernetes was foreign to me. This, too, is an article on K8s, so please indulge me.

Applications: like developers who provide a predefined service to stakeholders. Docker is a well-known runtime environment for building and running applications in containers. The article begins by creating a sample Go app, a basic app that is then deployed to a minikube cluster.

Kubernetes: like the engineering manager of the team. Kubernetes is an open-source container management and deployment platform. It orchestrates clusters of virtual machines and schedules containers to run on those virtual machines based on their available compute resources and the resource requirements of each container.

What are Pods?

A Kubernetes pod is a group of one or more containers that are deployed and managed together. Nodes and clusters: a node can be thought of as the workstation that provides resources to developers, with Kubernetes as the manager that allocates these positions to employees. In Kubernetes, a node is a worker machine that can be virtual or physical. A node can have many pods, and Kubernetes automatically manages the scheduling of pods among the nodes in the cluster.

Deployment: like the team's objectives and structure defined at the start of the year. A deployment provides declarative updates for Pods and ReplicaSets. In a deployment, we set a desired state, and the deployment controller gradually converts the current state to the desired state. A deployment file is like a declarative template for Pods and ReplicaSets; take a look at the sketch below.
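
The original file is not reproduced here, but based on the description that follows, a minimal sketch of such a deployment might look like this (the image name and container port are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2                # desired state: two pods of myapp
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0   # placeholder image
          ports:
            - containerPort: 8080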

The above deployment, named myapp in {metadata.name}, creates a ReplicaSet to bring up two pods of myapp.

What are ReplicaSets?

A ReplicaSet maintains a stable set of replica pods running at any given time; the deployment above uses one to keep two copies of myapp alive. Creating a K8s deployment: kubectl apply -f deploy.

Kubernetes Services: like the SPOC team that routes relevant external communications to developers. Pods are ephemeral resources; in Kubernetes, a service is an abstraction that defines a logical set of pods and a policy for accessing them. Kubernetes services provide stable addresses through which the associated pods can be accessed. Creating a Kubernetes service: kubectl apply -f service. The minikube tunnel command can be used to expose LoadBalancer services. The minikube tunnel runs as a process on the host and creates a network route to the cluster's service CIDR, using the cluster IP address as a gateway. You can then use this IP address to open the service in the browser; a sketch of such a service follows below.

The section on services is incomplete without an analogy to the life of developers: imagine an external team uncertain or confused about the use of a feature developed by the development team. Rather than chasing down individual developers (pods), they go through the single point of contact (the service). With that, we come to the end of the article on K8s and the adventures of the freshest day one.
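
For reference, a minimal sketch of such a LoadBalancer service (the names and ports are placeholders, not taken from the article):

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer      # exposed externally, e.g. through minikube tunnel
  selector:
    app: myapp            # forward traffic to pods labeled app: myapp
  ports:
    - port: 80            # port the service listens on
      targetPort: 8080    # port the container accepts traffic on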

https://betterprogramming.pub/relating-with-docker-and-kubernetes-as-developers-an-analogue-5e662b1f817b

From the blog CS@Worcester – The Dive by gonzalezwsu22 and used with permission of the author. All other rights reserved by the author.

Microservices and their Pros and Cons

Microservice Architecture is a strategy for designing software in which each part of the software exists separately from the others in what is called a microservice. These microservices have their own CPUs, exist in their own runtime environments, and are designed to do one specific job. Microservices communicate with each other through Application Program Interfaces (APIs) to form the whole of the product.

In their blog “What Is Microservice Architecture? Microservices Explained”, Johnathan Johnson and Laura Schiff discuss in detail the difference between Microservice Architecture and Monolith Architecture, a system in which the entire application exists as a single unit, and how Microservice Architecture is implemented and used. Today I want to discuss the pros and cons of this architecture listed in the blog and how I plan on using it in the future. I selected this post specifically because it goes into detail about the benefits and drawbacks of Microservice Architecture. If you would like to learn more about the specifics of the implementation, please visit the blog post linked above.

There are many pros to using microservices. The main reason it is implemented is that it promotes feature independence. This feature independence has many benefits, such as developer independence, isolation, resilience, and scalability. Personally, I think that developer independence is the most beneficial, since developers will become very competent with their microservice and will be experts at their specific job.

When a microservice wants to interact with another, it makes a call to that service's API. Since microservices exist in their own runtime environments, they will continue to function even if that API call is not executed. This means that if a part of an application goes down, the rest of the program continues to run, albeit at a loss of functionality. In my opinion, this is one of the best reasons to use microservices, since it makes applications very resilient.

The final pro I would like to discuss is scalability. Since each service has its own dedicated CPU, its usage can be scaled to meet demand. I think this is a great byproduct of microservices, but do not believe this should be the main reason for using them.

The issues related to microservices are real but manageable. Microservice Architecture adds complexity to a project, but this can be mitigated with training. Microservices also cost more, since they use more processing power and require experienced engineers, but I believe the benefits are worth the cost. Finally, microservices add a security risk, since each microservice can be attacked individually. Given an experienced team of engineers and good oversight, this risk can be lessened.

In conclusion, I believe Microservice Architecture to be a very good design choice and plan on using it in my future work. Reading the blog post was worth the few minutes it took, and I can say that I understand microservices much better now than I had before. The benefits far outweigh the risks and I can imagine many employers would already be using this architecture.

From the blog CS@Worcester – Ryan Blog by rtrembley and used with permission of the author. All other rights reserved by the author.

What is Semantic Versioning and why is it important?

Semantic Versioning 2.0.0, SemVer for short, is a format widely used by developers to determine version numbers. When implementing Semantic Versioning, you will need to declare an API (Application Programming Interface). This is because the way the version number is incremented depends on the changes made against the previous API. A semantic version consists of 3 numbers separated by dots and is formatted like so: MAJOR.MINOR.PATCH. There are rules to follow when incrementing each of the numbers, and the SemVer documentation provides a very helpful summary of when to do so:

MAJOR version is incremented when API incompatible changes have been made.

MINOR version is incremented when backwards compatible functionality has been added.

PATCH version is incremented when backwards compatible bug fixes have been made.

Next let’s take a look at each version number specifically. I came across this blog post on the Web Dev Simplified Blog and it does a good job of explaining each of the versions further. 

MAJOR Version 

The major version number is only to be incremented when an API-breaking change has been introduced. The blog provides examples: a major change could be anything from an entire library rewrite to a rework of a single component that still breaks the API. The version MUST be incremented if any backwards incompatible changes have been made. It is possible that minor and patch level changes have been made as well, but they must be reset to 0 when the major version is incremented. For example, when a major change is made to version 1.17.4, the new version number will be 2.0.0.

MINOR Version

The minor version number is incremented when backwards compatible changes have been made that don’t break the API. According to the SemVer documentation, it MUST be incremented if any public API functionality is marked as deprecated and MAY be incremented if substantial new functionality or improvements are introduced within the private code. When incrementing the minor version number, the patch level must be reset to 0 as well. For example, when a minor change is made to version 1.14.5, the new version number will be 1.15.0.

PATCH Version

The patch version number is incremented simply when backwards compatible bug fixes have been introduced. This number is commonly updated, and the only change that should be made is the fixing of incorrect behavior. For example, a bug fix for version 0.4.3, would make the new version 0.4.4.

Hopefully this blog was helpful in understanding Semantic Versioning. Its wide usage makes it evident that Semantic Versioning is a useful tool when creating new versions. For the most part, the process of deciding the version number is straightforward, but it is neat to see that there is an actual guideline that many developers follow.

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Software Construction Log #3 – Understanding Docker Volumes, Mounts, and Data Management

Though most people may not consider this an issue during the early stages of learning how to use Docker for deployment, data retention and persistence is one of the problems that one needs to consider when utilizing Docker containers. Previously, I wrote about virtualization using either Virtual Machines or Docker, though I mostly focused on how both work on an operating system and what they require in terms of system resources. What I did not mention, however, was how either Virtual Machines or Docker operate when it comes to data persistence. We know that the host systems we use retain the data that we create between sessions; if I power the computer I am using now off and then on after submitting this blog post, the majority of the data created in the previous session will be available in the next session. As we begin to work with virtualization, however, data persistence becomes a much greater issue to consider, particularly when working with Docker. In the case of virtual machines, data persistence is not much of an issue, as a virtual machine's disk retains its data between sessions much like a physical machine's.

Data persistence for Docker containers, however, works differently. It is stated in the Docker documentation that:

The data doesn’t persist when that container no longer exists, and it can be difficult to get the data out of the container if another process needs it.

https://docs.docker.com/storage/

When a container is deleted and then recreated, whatever files were created and used in the container will be gone, and the new container will essentially run on a “clean state”. However, there is a way to guarantee data persistence on a Docker container at any point: binding mounts or volumes to the container. Though Docker offers other types of mounts, such as tmpfs mounts and named pipes, I will mostly be focusing on bind mounts and volumes for the remainder of this post as ways of maintaining data persistence between a host machine and a Docker container.
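
As a rough illustration of the two approaches, here is a minimal Docker Compose sketch that declares both; the image and paths are placeholders of my own:

services:
  db:
    image: postgres:14
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume, created and managed by Docker
      - ./backups:/backups                 # bind mount, a specific host directory
volumes:
  db-data:                                 # declared so Docker creates and tracks it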

While I was researching the differences between using bind mounts and volumes, I came across the following two articles: a tutorial titled Docker Volumes – Tutorial on buildVirtual.net and an article titled Guide to Docker Volumes on Baeldung.com by Ashley Frieze. In the Baeldung article, Frieze showcases how the Docker file system works and, in turn, how data retention is affected in a Docker container before explaining the differences between using volumes and bind mounts. Likewise, the buildVirtual tutorial outlines the above differences, as well as showing how to utilize and delete volumes through docker commands.

Although both bind mounts and volumes can be used for data persistence, it is important to know which method to utilize, depending on where we want the data to be stored on the host system and on how other Docker or non-Docker processes may need to interact with it.

Direct links to the resources referenced in the post: https://www.baeldung.com/ops/docker-volumes and https://buildvirtual.net/amp/docker-volumes-tutorial/

Recommended materials/resources reviewed related to Docker mount and volumes:
1) https://4sysops.com/archives/introduction-to-docker-bind-mounts-and-volumes/
2) https://medium.com/@BeNitinAgarwal/docker-containers-filesystem-demystified-b6ed8112a04a
3) https://www.baeldung.com/ops/docker-container-filesystem
4) https://digitalvarys.com/docker-volume-vs-bind-mounts-vs-tmpfs-mount/
5) https://medium.com/devops-dudes/docker-volumes-and-bind-mounts-2fb4bd9df09d
6) https://docs.microsoft.com/en-us/visualstudio/docker/tutorials/use-bind-mounts
7) https://blog.logrocket.com/docker-volumes-vs-bind-mounts/

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.

Draw Your Own Map

Deciding your concentration when you are in college is one of the most important steps a student can take. Understand that it's not up to your career counselor or your professors to decide for you. You can ask for help and opinions, but deciding what you want is something you have to do by yourself. “Draw Your Own Map” talks about how arriving at your next step, and charting the course to ultimately arrive at your ideal destination, is your responsibility. With your next career step identified, visualize the smaller, interim steps you need to take to move forward. It is vitally important that you take the first step, even if it doesn't seem that significant. That first step will generate the momentum that will help carry you toward your goals. It's the willingness to take that first terrifying step (and all the other steps later on), even in the absence of a perfect plan, that turns your map from a daydream into reality.

Until late in my senior year, I wasn't sure what I wanted to focus my concentration on. What helped me was analyzing my strong and weak points: what I enjoyed and what I didn't, what education I should continue after finishing my bachelor's, and what possibilities there are in each case. After much thought, I decided that the subject I enjoyed most was big data. A strong background in math and statistics will put you ahead of the pack. Also, the computer languages that I really enjoy are Python and R, which are two important ones in big data. I like to play around with visualizing and analyzing datasets. For me, a dataset is more than numbers and characters; I like to discover the hidden connections behind it. This keeps me going and learning more.

 Remember, there isn’t one single path that all apprentices follow. Instead, successful apprentices follow paths that share a certain family resemblance. These resemblances do not happen because apprentices are inexorably shepherded into making the same decisions by their mentors. They happen because each apprentice, consciously or not, chooses their route through life based on an overlapping set of values.

From the blog CS@Worcester – Tech, Guaranteed by mshkurti and used with permission of the author. All other rights reserved by the author.

Share What You Learn -Apprenticeship Pattern

The “Share What You Learn” apprenticeship pattern, written by Adewale Oshineye and Dave Hoover in the book Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman (2009), is about sharing your knowledge. It is for software apprentices who have solely focused on their own growth and improvement. Sharing what you learn can benefit yourself as well as others: it helps you learn to communicate what you know effectively, and it also helps people who are on the same path as you, trying to learn the same things.

Maintaining a blog, tutorial channel, etc. during your early apprenticeship years can be great for your growth as a developer. You may feel like you only have a basic understanding of some topics that you have just learned, but this may benefit others more than you would think. As the book states, your basic understanding will translate into an easy-to-understand, straight-to-the-point article, blog post, etc. that will help other apprentices who are right behind you on the journey to becoming a master. So don't think that you need to leave it to someone who has an extremely in-depth understanding of the topic; oftentimes what they give back to the community will not be as beginner-friendly. You must be careful when sharing knowledge, though, and make sure that you are not spilling a company's secrets, which can ruin your relationship with your team or get you in legal trouble.

I like this apprenticeship pattern in particular because it benefits yourself and others at the same time. Teaching is a great way to learn, both for the teacher and for the student. A certain part of this pattern really caught my attention: not to worry about leaving these posts/tutorials to people who have greater knowledge of the topic. Oftentimes when learning something new, I will go to forums rather than tutorials. On these forums you can often find threads where students are asking the same questions about a topic that you are wondering about yourself, and a lot of the time other students will respond to these questions with their understanding of the topic. This is a great starting point for learning something new, because often there are basic, easy-to-understand, straight-to-the-point responses that can kickstart your learning of the topic.

Hoover, D. H., & Oshineye, A. (2010). Apprenticeship patterns: Guidance for the aspiring software craftsman. Sebastopol, CA: O’Reilly.

From the blog CS@Worcester – Austins CS Site by Austin Engel and used with permission of the author. All other rights reserved by the author.

Concrete Skills – Apprenticeship Patterns

The main focus of this apprenticeship pattern, called Concrete Skills, is having a fundamental base. A person can't just rely on their ability to learn something; they need to show that they already have some skills that they have learned and improved on over the years. These skills relate directly to software construction, such as being familiar with a certain programming language and implementing different features that can be useful for a company. While it is important and recommended to have many soft-skill qualities, you can't rely solely on them to land a job relating to software. You need to be able to showcase technical skills, and so this pattern talks about taking action to improve the skills needed for a certain job's criteria. It recommends looking at the cover letters of individuals who are currently doing the job you hope to seek and copying down the discrete skills listed on them that you can use to improve yourself.

I think this pattern has a really great concept that software apprentices can use to improve their knowledge of programming and other aspects of engineering. We can't just have the mindset that we can learn new things when the time is right and try to learn them then. We need to do more research on our own and try our best not to rely on others to carry us through the fundamentals. People at work and on a team don't want a team member who they constantly have to monitor and essentially babysit for them to complete their work. We need to learn the essentials for the job in question and practice implementing those skills on our own to show a boss or team members that we are capable of doing the work. This will prove that you are in fact able to learn and, down the line, will be capable of learning new software or processes, which is often required given the rapid pace of the world's technological advancement. Overall, I think that once we follow this pattern in a practical sense and change our mindset, we will see vast changes in our skills as well as in the way we look at learning.

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.