Category Archives: Week 8

Container Orchestration

Christian Shadis               

Docker has been the focus of my Software Construction, Design, & Architecture class for the past couple weeks. In a software development environment, it is paramount that applications can run on different operating systems and in different runtime environments. When applications rely on several dependencies, it becomes cumbersome to install all needed libraries on every machine the application runs on. This is where Docker comes in handy, allowing the programmer to create a self-sufficient container holding all parts of the application, along with all of its dependencies. Anybody can now run an application using its container without having to install any dependencies on their local computer.

Applications, however, are often designed as collections of microservices, each microservice running in its own container. A software company may have tens, hundreds, or thousands of different containers that need to be deployed and monitored at once. It is plain to see how this becomes a scaling issue. Container orchestration emerged to address the scalability of containerized applications. Container orchestrators, like Kubernetes or Docker Swarm, automate the repetitive work related to the deployment and maintenance of containers, such as configuration, scheduling, provisioning, deployment, resource allocation, scaling of containers, load balancing, monitoring, and facilitating secure interactions between containers. This is why I chose to read the article “What is Container Orchestration” from Red Hat.

The article goes into detail on what container orchestration is used for and why it is necessary, along with listing its major functions. It also describes how container orchestration programs like Kubernetes are configured, along with a basic overview of the anatomy and behavior of a Kubernetes cluster.

Using Docker to containerize applications is a pivotal skill for developers to have. In a world where so much computing is moving toward cloud technology, however, it is also important to be able to use Docker Swarm or Kubernetes, because a large portion of the applications a developer works on will be deployed to the cloud in some way. In those situations, knowledge of individual Docker containers alone will be of little use. Instead, the developer should be able to leverage a Kubernetes cluster or a Docker Swarm to work with large containerized cloud-based applications.

Before reading this article and writing this entry, I had no exposure to container orchestration, though I had wondered about the scalability of the small Docker containers we have been working with in class. I learned the basics of the subject and gathered a list of further references for reading more about applying container orchestration in an enterprise setting.

https://www.redhat.com/en/topics/containers/what-is-container-orchestration

From the blog CS@Worcester – Christian Shadis' Blog by ctshadis and used with permission of the author. All other rights reserved by the author.

Benefits of using REST API

Application Programming Interfaces (APIs) are the set of queries and commands that a user is allowed to use, and they are necessary when using a microservice architecture. APIs can be considered the messengers between microservices, since they transfer data between services. REST APIs are APIs that conform to the REST model, which typically uses JavaScript Object Notation (JSON) to transfer data. JSON is a human-readable data format and is used because it is easy to understand. In the blog post “What is an API? A Digestible Definition with API Examples for Ecommerce Owners,” Matt Wyatt explains what an API is and why you would want to use one.

APIs use endpoints, which are URLs that execute a certain operation in the API. There are four default HTTP methods that are standard practice: GET, PUT, POST, and DELETE. GET retrieves data from the API, PUT updates an existing entry, POST creates a new entry, and DELETE removes an entry. These are just the standard methods; the developer of the API may expose many other operations through additional endpoints.
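As a quick illustration, here is a minimal sketch of calling two of these methods from Java using the built-in java.net.http client (Java 11+); the https://api.example.com endpoints are hypothetical, just for illustration:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestClientSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET: retrieve an existing entry from a hypothetical endpoint
        HttpRequest get = HttpRequest.newBuilder(
                URI.create("https://api.example.com/users/42"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());

        // POST: create a new entry by sending JSON in the request body
        HttpRequest post = HttpRequest.newBuilder(
                URI.create("https://api.example.com/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\": \"Ada\"}"))
                .build();
        client.send(post, HttpResponse.BodyHandlers.ofString());
    }
}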

APIs are used in most applications available today. Whenever you do anything online, you are most likely using an API. APIs are so ubiquitous that it would be more difficult to find an application that does not use them. Using Facebook as an example, you would use a login API, a search API, a feed API, a friend-request API, and possibly many more that are not obvious to the user.

Benefits of using such APIs include increased security, faster response times, and the ability to scale certain features when needed. APIs allow applications to restrict what data they let in and out, which greatly increases security. This control is paramount in an application that contains sensitive information, such as personal details or banking information.

Personally, I believe APIs to be an excellent solution to a wide range of problems. They can easily allow access to data and restrict it at the same time. I see similarities between APIs and Java classes: good Java classes have access methods (getters/setters) that control how the class is used, their attributes are mostly private, and their implementation is kept hidden from the user. A good Java class can be used by someone who has no knowledge of how it works behind the scenes, and the same is true for APIs.
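To make the analogy concrete, here is a small, hypothetical sketch of such a class: callers can only read the balance or deposit money through the public methods, while the representation stays private, much like an API exposing endpoints while hiding its implementation:

public class Account {
    private double balance; // hidden implementation detail

    public double getBalance() { // controlled read access, like a GET endpoint
        return balance;
    }

    public void deposit(double amount) { // controlled write access, like a POST endpoint
        if (amount > 0) {
            balance += amount;
        }
    }
}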

In conclusion, I think APIs are a useful tool that I most certainly will be using in the future. Regardless of my future job, frontend or backend, I will need to either create APIs or consume them. They are so universal that avoiding them is almost impossible, although I wouldn’t want to avoid them. They offer a simple solution to a very complex problem and provide extra benefits along with it.

From the blog CS@Worcester – Ryan Blog by rtrembley and used with permission of the author. All other rights reserved by the author.

An Insight on Deep Class Design

Last week we took a look at API Design and discussed the importance of keeping an API as simple as possible. (Learn more about last week’s post here: https://georgechyoghlycs343.wordpress.com/2021/10/28/api-creation-why-simple-is-better/)

This week we will take a look at how this simplicity should be applied not only to APIs but to all programs regardless of purpose or language. As programs become more and more complex it only gets harder to properly keep track of every facet of said program. Having to remember every method and class separately can quickly become a grueling task that eats away at the time, productivity and morale of a software developer.

In this video the speaker, Professor John Ousterhout, gives his take on software design and what kind of mindset to adopt when designing your software. One of the major topics of this seminar is the idea of ‘deep classes’, which returns to the basic idea of abstraction. Ousterhout brings into focus the issues with ‘shallow classes’, which are small classes or methods that provide very little functionality. An example of this is shown below:

// A "shallow" method: its signature tells you everything its body does,
// so it adds an interface without hiding any complexity.
private void addNullValueForAttribute(String attribute) {
    data.put(attribute, null);
}

As Ousterhout states, this is a method that requires a full understanding of its functionality and does very little to hide information (https://youtu.be/bmSAYlu0NcY?t=918). This method essentially adds complexity with no benefit, which is a net loss in the world of software. With this example in mind, Ousterhout names the biggest mistake people make in software design: “too many, too small, too shallow classes”. He attributes this to advice many developers have been given throughout their careers, which is to keep methods small. This is problematic because it can actually increase the complexity of a program, as every class and method adds interface while contributing only a small amount of functionality.

This is especially true in places like the Java class library, which has many small classes and methods with little functionality each. To read a serialized object from a file, Java requires you to create three separate classes to handle file input, buffering, and object input. In contrast, Ousterhout brings up that UNIX wraps all of this into one deep abstraction, the UNIX file system interface, which takes care of other processes such as disk space management in the background.
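Here is a short sketch of that Java example; the file name is hypothetical:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.ObjectInputStream;

public class ReadObject {
    public static void main(String[] args) throws Exception {
        // Three stacked classes just to read one object:
        // FileInputStream (file input), BufferedInputStream (buffering),
        // and ObjectInputStream (object input).
        try (ObjectInputStream in = new ObjectInputStream(
                new BufferedInputStream(
                        new FileInputStream("data.bin")))) {
            Object obj = in.readObject();
            System.out.println(obj);
        }
    }
}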

So why does this matter in the end? The main point is that abstraction is extremely important in modern software development. UNIX abstracted its file system, which lets developers spend little time worrying about file I/O implementation so that greater systems can be built. If something is used as often as file I/O, then it is worth creating an all-encompassing class or method for it. As long as classes are well organized, there is no reason they cannot be large and have a lot of functionality.

From the blog CS@Worcester – George Chyoghly CS-343 by gchyoghly and used with permission of the author. All other rights reserved by the author.

Microservices Architecture In-Depth

Microservices architecture is a newer type of architecture that has become popular in recent years. This architecture differs from common architectures like the monolith in that microservices “is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms.” To better understand the microservices architecture, it can be useful to compare it to the classic monolith architecture. In a monolith, there is one single unit, one central server, if you will, that does everything. One of the biggest problems with this design is that if anything needs to be done to the server, like maintenance, or if something goes wrong and the server goes down, the entire system goes down with it as a result of being one big unit. Microservices fix this by having many small units/servers running almost independently of each other; while some may rely on others for information, if one part goes down for whatever reason, it does not take the entire system down with it.

A good example of how microservices operate in comparison to a monolith: say we have two main servers in our system, one for the admins and another for the standard users. The admins have their own server and database, while the users have theirs. If maintenance has to be done on the admin server, whether it is updating the database schema or anything else, it can be done without affecting the standard users, since their server and database are separate from the admins’. This is nice as it limits downtime: maintenance needs to touch the users’ server only when absolutely necessary, limiting outages.

Microservices architecture also puts a big focus on failure detection and on automatically restoring a service if it goes down. These services are built in a way that allows early error/failure detection, so that fixes can be put out as fast as possible. Microservices are also built with upgradeability and scalability in mind, so that as a service grows and requires more processing power, it is easy to add more servers to supply it. With a monolith, by contrast, this requires getting a whole new, stronger server rather than just adding another one; since the monolith is ‘all-in-one’, it does not allow the easy scaling that microservices do.
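As a rough sketch of the failure-isolation idea, a caller can probe a dependency’s health endpoint with a short timeout and treat failure as data instead of crashing; the /health path and address here are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class HealthCheckSketch {
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    // Returns true only if the service answers its health endpoint in time.
    static boolean isHealthy(String baseUrl) {
        HttpRequest probe = HttpRequest.newBuilder(URI.create(baseUrl + "/health"))
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();
        try {
            HttpResponse<Void> response =
                    CLIENT.send(probe, HttpResponse.BodyHandlers.discarding());
            return response.statusCode() == 200;
        } catch (Exception e) {
            // Unreachable or too slow: report it as down instead of crashing the caller.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isHealthy("http://localhost:8080"));
    }
}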

Source

I chose the above source from martinfowler.com because we had some readings from that website earlier in the course, so since they had an article on microservices I thought, what better site to use than them? They also provide a lot of good information on the topic, with clear and concise explanations.

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.

Software Construction Log #4: Understanding Semantic Versioning

          Software releases are not always one-and-done affairs; more often than not, the software we use is actively worked on and maintained for an extended period of time well after its initial release. We can expect that software, during its support lifetime, will undergo several types of changes, including the implementation of new features and various vulnerability fixes. Such changes are as important to document properly as the technical aspects of the software, such as its use and structure as conceived during development. This documentation of changes is often referred to as “software versioning”, and it involves applying a version scheme in order to track the changes that have been implemented in a project. While developer teams may devise their own schemes for versioning, many prefer to use Semantic Versioning (https://semver.org/) as a means of keeping track of changes.

          Semantic Versioning is a versioning scheme that applies a numerical label to a project, separated into three parts (X.Y.Z), each of which is incremented depending on the type of change that has been implemented. These parts are referred to in the documentation as MAJOR.MINOR.PATCH and are defined as:

1. MAJOR version when you make incompatible API changes,
2. MINOR version when you add functionality in a backwards compatible manner, and
3. PATCH version when you make backwards compatible bug fixes.

https://semver.org/

The way semantic versioning works is that when one part is incremented, every part to its right is reset to zero: if a major change is implemented, the minor and patch numbers reset to zero, and likewise, when a minor change is implemented, the patch number resets to zero. While this scheme is relatively straightforward in and of itself, the naming convention of the numerical labels (specifically “major” and “minor”) may confuse some due to its ambiguity. However, there is another naming convention for semantic versioning, which reads the numerical version label as (breaking change).(feature).(fix).
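A tiny Java sketch makes the reset rule concrete (the SemVer type here is hypothetical, just for illustration, not a real library):

public class SemVerDemo {
    // Hypothetical value type showing how version parts reset (Java 16+ records)
    record SemVer(int major, int minor, int patch) {
        SemVer bumpMajor() { return new SemVer(major + 1, 0, 0); }    // breaking change
        SemVer bumpMinor() { return new SemVer(major, minor + 1, 0); } // new feature
        SemVer bumpPatch() { return new SemVer(major, minor, patch + 1); } // bug fix
        @Override public String toString() { return major + "." + minor + "." + patch; }
    }

    public static void main(String[] args) {
        SemVer v = new SemVer(1, 4, 2);
        System.out.println(v.bumpPatch()); // 1.4.3 — fix only
        System.out.println(v.bumpMinor()); // 1.5.0 — patch resets
        System.out.println(v.bumpMajor()); // 2.0.0 — minor and patch reset
    }
}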

          Though both naming conventions are used, I find the latter far more straightforward to understand and utilize, as its names give a better idea of the importance of a newly implemented update. As I was researching more resources on Semantic Versioning beyond the official documentation, I came across the following archived article on Neighbourhood.ie titled Introduction to SemVer. In this article, Irina goes into further detail on semantic versioning by explaining the naming of each component, as well as noting the difference between the two naming conventions.

          Although they go into further detail on semantic release in another article, this article sufficiently covers the fundamentals of semantic versioning. While this versioning scheme is not the only way to version software, it is an important tool that can help document a project’s history during its support lifetime and outline important changes clearly and efficiently.

Direct link to the resource referenced in the post: https://neighbourhood.ie/blog/2019/04/30/introduction-to-semver/

Recommended materials/resources reviewed related to semantic versioning:
1) https://www.geeksforgeeks.org/introduction-semantic-versioning/
2) https://devopedia.org/semantic-versioning
3) https://www.wearediagram.com/blog/semantic-versioning-putting-meaning-behind-version-numbers
4) https://developerexperience.io/practices/semantic-versioning
5) https://gomakethings.com/semantic-versioning/
6) https://neighbourhood.ie/blog/2019/04/30/introduction-to-semantic-release/

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.

Looking at the Interpreter Pattern

“Interpreter Pattern – Spring Framework Guru”

The above article describes in detail how the Interpreter pattern from the Gang of Four book works. Before I continue, I should specify that the book, this post, and the above article focus on the Interpreter pattern primarily in terms of object-oriented programming, which is not the only way the general concept may be applied. There is the general idea, which is creating a small, limited language inside of a program, and then there is the Gang of Four specification, which includes more specific implementation details. For example, the article mentions regular expressions, which do not necessarily have to be implemented in an object-oriented way.

I sort of understood what the Interpreter pattern was in a broad sense, but not in the specific technical sense one needs to actually make use of it. For example, I didn’t know that it employs another pattern, called the Composite pattern. The Composite pattern is essentially the idea of treating individual objects and compositions of objects uniformly by representing them as a tree. The Interpreter pattern uses the Composite pattern to represent an expression to be interpreted: it represents the components of the expression as atomic expressions arranged in a tree called an “abstract syntax tree,” and then evaluates the expression by recursively processing each node. The components of this expression may be either non-terminal or terminal, representing operations with and without operands, respectively (for example, a multiplication versus a constant). The pattern also specifies a “context,” which holds the string data being parsed, and a “client,” which actually parses the expression.
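Here is a minimal Java sketch of these roles, with a constant as the terminal expression and multiplication as a non-terminal (the class names are my own, for illustration):

// Abstract expression: every node in the syntax tree can interpret itself
interface Expression {
    int interpret();
}

// Terminal expression: no operands
class Constant implements Expression {
    private final int value;
    Constant(int value) { this.value = value; }
    public int interpret() { return value; }
}

// Non-terminal expression: operates on sub-expressions
class Multiply implements Expression {
    private final Expression left, right;
    Multiply(Expression left, Expression right) { this.left = left; this.right = right; }
    public int interpret() { return left.interpret() * right.interpret(); }
}

public class InterpreterDemo {
    public static void main(String[] args) {
        // The client builds the abstract syntax tree for (3 * 4)
        // and evaluates it by recursively interpreting each node.
        Expression tree = new Multiply(new Constant(3), new Constant(4));
        System.out.println(tree.interpret()); // prints 12
    }
}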

This pattern may be useful in a case where I want to create a smaller, more focused programming language inside of a project for use by less technically oriented people. Specific potential use cases like this include a language for writing plugins in an art program or a simplified language for directing game behavior inside of a game engine like GameMaker’s GML. Other, more common examples include SQL parsing and, as was mentioned previously, regular expressions.

At its core, the Interpreter pattern is essentially about converting instructions from one language to another, much like a human interpreter translates ideas from one language to another. “Interpreted” languages such as Python can be seen as an instance of this idea, using an interpreter to convert Python expressions into work done by the computer. In the same way, you might view a C compiler as an interpreter that converts C instructions into machine instructions.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Breakable Toys – Apprenticeship Pattern

The “Breakable Toys” apprenticeship pattern, written by Adewale Oshineye and Dave Hoover in the book Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman, 2009, is about creating projects on your own in order to learn from them. Experience is more often built through failure than success.

Sometimes, in the workplace, it is not acceptable to fail when people are depending on you. This puts your learning on pause. As the book explains, three-ball jugglers will not be able to step up to juggling five balls without trying and failing first. Only the jugglers who keep trying and failing will be able to move up to five. It is the same with software development, which is why the authors recommend you make “breakable toys”. This means creating your own projects, on your own time, that are fun to work on. During the development of these projects you can fail without hurting or letting anyone down. This allows you to grow and improve your skills.

The authors recommend building a wiki, as it helps you “record what you learn” (see my other post) and also teaches you a deeper understanding of web development topics such as HTTP, REST, data migration, and concurrency. This is a great way to learn about web environments. They recommend starting small with just an interface; then, as your skills improve, you can experiment with things such as tagging and ranking algorithms. Another recommendation is to build a new game every time you learn a new language. These are simple games such as tic-tac-toe, Tetris, or Snake. This will help solidify your knowledge of the new language. These projects are meant to be low risk, to allow room for failure, and also to be fun. If a project is not fun, another one will grab your attention and the one you are currently working on will gather dust.

The main point of this pattern is to create opportunities to venture outside your boundaries. If you are stuck only doing what you know, then you won’t learn anything new. When learning something new, you will often fail, and that is the best, and perhaps only, way to really learn.

From the blog CS@Worcester – Austins CS Site by Austin Engel and used with permission of the author. All other rights reserved by the author.

Practice, Practice, Practice

Last semester I had to learn a new programming language for a class. I had a short period of time to learn it and had to use it in a project. I really wanted to learn how to code in R, and it was something that I enjoyed. But the thing that helped me become better at it was practice. Take the time to practice your craft without interruptions, in an environment where you can feel comfortable making mistakes.

The book describes an ideal world where a mentor would assign you an exercise based on her understanding of your strengths and weaknesses. When you finished the exercise, the mentor would work with you to rate your performance using an objective metric and then devise the next exercise with you. Even though this is a good practice and would really help a lot of people, we do not live in an ideal world and must fall back on our own resources to achieve the same effect. So we have to practice to understand our weaknesses and transform them.

The key to this pattern is to carve out some time to develop software in a stress-free and playful environment: no release dates, no production issues, no interruptions. Now that I’m freer to practice R, I can see the benefit of this pattern. I usually find exercises, small projects, and books online and practice my knowledge on those. It is more fun, and I’m not worried about meeting a deadline or missing any instructions. As Dave Thomas says of practicing, “It has to be acceptable to relax, because if you aren’t relaxed you’re not going to learn from the practice.”

There are some things to keep in mind when you practice, though. Practice makes permanent, so be careful what you practice, and constantly evaluate it to ensure you haven’t gone stale. A good way to ensure you have interesting exercises for your practice sessions is to trawl through old books. I personally like to get my information from different sources and then compare them; I feel like you learn more this way. Take that knowledge and try to find or devise a new exercise that will have a measurable impact on your abilities. Repeat.

References:

Apprenticeship Patterns by Adewale Oshineye and Dave Hoover, published by O’Reilly Media, Inc., 2009

From the blog CS@Worcester – Tech, Guaranteed by mshkurti and used with permission of the author. All other rights reserved by the author.

Rubbing Elbows-Apprenticeship Pattern

In this post, I will be writing about the “Rubbing Elbows” apprenticeship pattern from the book Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman by Adewale Oshineye and Dave Hoover, 2009. This pattern is for people who typically work alone when developing software and feel as if they have reached a plateau, not learning superior techniques and approaches.

The Rubbing Elbows apprenticeship pattern suggests that in order to cure this, you should work side-by-side with another software developer to complete hands-on tasks. This can help you learn things that cannot be taught in a classroom or online, because you will pick up certain micro-techniques that you can only really obtain through experience or through being around and working with another developer. These techniques add up, providing a significant increase to your skill. An example of this pattern given in the book is pair programming. When used correctly, it can be one of the best ways to learn. Pair programming (especially with a mentor) can help you pick up other skilled developers’ habits and lets you observe how they polish those habits to improve their own skill.

If you do not have this opportunity at your workplace, the book suggests that you find someone who is interested in contributing to open source projects. It suggests you should take one night a week to work with this individual on the project in a sort of pair programming manner, learning from each other as well as motivating each other.

I completely agree with what this pattern says. Working side-by-side with someone on a project or task, especially someone with greater skill, can greatly boost your own skill and exposes you to things that cannot be taught directly. Not only with coding, but with other hobbies in my life, being around others has really helped me pick things up quicker. For example, I started snowboarding last year. I had three friends I would primarily ride with, two extraordinarily better than me and one at the same skill level. I was motivated by the one at my skill level, and I learned from him by observing certain techniques and habits that helped him improve. I also picked up styles, techniques, and habits from the ones already at a high skill level, boosting my knowledge and understanding. Now, a year after starting this new hobby, I am more skillful than 70% of people on the mountain (not to be smug). This is largely because of riding with my peers.

Hoover, D. H., & Oshineye, A. (2010). Apprenticeship patterns: Guidance for the aspiring software craftsman. Sebastopol, CA: O’Reilly.

From the blog CS@Worcester – Austins CS Site by Austin Engel and used with permission of the author. All other rights reserved by the author.

The Long Road – Apprenticeship Pattern

What this apprenticeship pattern talks about is that the journey of a software developer, and of a software craftsman, is all about the long term. You have to have the long-term mindset and vision to always be learning, and to realize that you will never be done. People nowadays look for ways to do everything quickly and want the ability to master something overnight. Well, when it comes to software development and the journey that needs to be followed, this is not realistic. To build the skill of the best software developers, you need to realize that it will take time and years of practice and learning. I think this is a good mindset to have in order to not give up and to keep pushing yourself when times get tough.

One thing that I have seen over the years with coding is that when people start to have a tough time and can’t seem to figure something out, they give up and self-declare that programming isn’t for them. They don’t realize that with just a little more effort and time they would have figured it out and become better coders, while also increasing their confidence. This shows me that no matter what adversity you face, you need to keep going and not give up at the very time you need to put in the most effort. Once you get past that barrier, the road starts to get a little easier, and you become more capable of dealing with problems that just a while back you thought were impossible.

Taking the long road is key in the software world, and it is also key in our everyday lives. I’m sure there were times when we did not want to do something because we thought it was hard, but looking back at it now seems silly to us. This is why, when you keep a long-term vision, the issues that come up in the present won’t seem so big, because you know that in the end it will all be worth it. The software craftsman keeps striving to improve and learn, knowing it will all be worth it and the journey will be a great one. Overall, the message is to keep striving to learn, to build up your skills, and to work hard to break through barriers by not giving up.

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.