Category Archives: Week 9

REST APIs

REST is an acronym for Representational State Transfer, an architectural style for distributed hypermedia systems. There are guiding principles and constraints that need to be met for an API to be referred to as “RESTful”. In total, there are six principles/constraints: Uniform Interface, Client-Server, Stateless, Cacheable, Layered System, and, optionally, Code on Demand.

Having a uniform interface simplifies the overall system architecture and helps to improve the visibility of interactions. Within this principle there are four further constraints that must also be met to be RESTful:

  • Identification of resources: each resource must be uniquely identified.
  • Manipulation of resources through representations: resources must have a uniform representation in the server response.
  • Self-descriptive messages: each resource representation should carry enough information to describe how to process the message.
  • Hypermedia as the engine of application state: the client should only need the initial URI of the application, and the application should drive all other resources and interactions through hyperlinks.
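
As a small, made-up illustration of the last two constraints, a response for a single order resource might describe itself through its media type and point the client to the next possible actions through links (the resource names and URIs here are only hypothetical):

    GET /orders/42 HTTP/1.1
    Host: api.example.com
    Accept: application/json

    HTTP/1.1 200 OK
    Content-Type: application/json

    {
      "id": 42,
      "status": "processing",
      "links": [
        { "rel": "self",   "href": "/orders/42" },
        { "rel": "cancel", "href": "/orders/42/cancel" },
        { "rel": "items",  "href": "/orders/42/items" }
      ]
    }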

Client-server is much simpler, as it only refers to the separation of concerns, so that the client and server components can evolve independently of each other. This separates the user interface from the data storage and allows each to be improved independently without interrupting the other.

Statelessness refers to each request from the client to the server containing all of the information required to completely understand and execute the request. In addition, the server cannot take advantage of any previously stored context information.
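
For example, a stateless request has to carry everything the server needs on every call, including credentials, rather than relying on a session the server remembers (the endpoint and token below are made up):

    GET /orders?status=processing&page=2 HTTP/1.1
    Host: api.example.com
    Authorization: Bearer <token>
    Accept: application/json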

Cacheable means that a response must implicitly or explicitly label itself as cacheable or non-cacheable. If a response is marked as cacheable, the client application can reuse the response data later for equivalent requests, which helps improve overall performance.
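
For instance, a server can label a response explicitly using standard HTTP caching headers (the values here are only illustrative):

    HTTP/1.1 200 OK
    Content-Type: application/json
    Cache-Control: public, max-age=3600
    ETag: "abc123"

A client that sees these headers can serve the cached copy for up to an hour before asking the server again.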

A layered system allows the architecture to be composed of hierarchical layers by constraining the behavior of the components. For example, each component in a layered system cannot see beyond the layer it is interacting with.

Lastly, REST optionally allows client functionality to be extended by downloading and executing code in the form of scripts. Downloading this code reduces the number of features that need to be pre-implemented on the server.

Source

I chose the above article because I wanted more information on what it means for an Application Programming Interface to be “RESTful”. The article went above and beyond what I had hoped to find and provided a great deal of detail; because of it, I now have a much clearer understanding of what it means for an API to be RESTful.

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.

GitLab and the growth within

Collectives™, which launched this past June, is a new offering that creates a set of spaces where content related to certain languages, products, or services is grouped together on Stack Overflow. These spaces make it easier for users to discover and share knowledge around their favorite technologies. With the launch of its Collective, GitLab will continue to build on the collaboration that already exists with the community of developers and contributors using its platform. “Community is at the core of GitLab’s mission. With more than 1 million active license users and a contributor community of more than 2,400 people, we have a strong community aligned with our mission – to create a world where everyone can contribute,” said Brendan O’Leary, Senior Developer Evangelist at GitLab. “GitLab’s Collective on Stack Overflow aligns with our mission. This new space will help us to expand our open-source collaboration so contributors and developers can share and learn about version control, CI/CD, DevSecOps, and all-remote workflows. We believe the GitLab Collective will be a place where we can discover feedback and create opportunities for the GitLab community to contribute to Stack Overflow’s community.”

GitLab’s Collective is defined by a set of specific tags related to the company’s technology, such as ‘gitlab’ and ‘gitlab-ci’. Users who join the Collective can easily find the best answers and get in-depth technical product information about GitLab’s platform and application through how-to guides and knowledge articles. They can also see how they stack up on the leaderboard, and top contributors can be selected by GitLab as Recognized Members, users the company approves to respond to questions or recommend answers. When Collectives launched on Stack Overflow with Google Cloud and the Go language earlier this summer, thousands of community members joined in. Taken together, the contributions of the Collectives’ community can help the millions of curious question askers who visit Stack Overflow, as well as users looking for a solution to a problem or a way to improve their skills. GitLab’s efforts to expand the pool of open-source collaborators align with its mission: to empower the world to develop technology through collective knowledge.

With such developments happening at GitLab, we can foresee GitLab leading the way for developers and engineers to further their knowledge and grow. By helping people learn about version control, CI/CD, and DevSecOps, this will continue the growth of both those who use the platform and GitLab itself.

From the blog CS@Worcester – The Dive by gonzalezwsu22 and used with permission of the author. All other rights reserved by the author.

Understanding Docker Compose and its Benefits

Recently, I worked with bash scripts and docker-compose files in order to run a set of containers. While both seemed to be valid ways of running multiple containers in Docker, I wanted to look into docker-compose files further to understand the possible advantages and use cases. A resource that I found quite helpful was The Definitive Guide to Docker compose, a blog post by Gabriel Tanner. Inside, he explains why we should care about Docker Compose and its potential use cases.

Docker Compose

Let us first go over what Docker Compose is. According to the Docker Compose documentation, “Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.” Instead of listing each docker run command in a script, Compose utilizes a docker-compose.yml file to handle multiple docker containers at once.

Here’s a sample docker-compose.yml file:
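
What follows is a minimal sketch of such a file, assuming a single nginx service named web1 that matches the docker run command discussed below; the image, port mapping, and volume path come from that command, and everything else is a plausible default:

    version: "3"

    services:
      web1:
        container_name: web1               # same name the docker run command uses
        image: nginx:mainline              # the image to run
        ports:
          - "10000:80"                     # host port 10000 -> container port 80
        volumes:
          - ./web1:/usr/share/nginx/html   # serve local files through nginx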

You might be able to recognize some of the labels such as image, ports, and volumes, all of which would normally be specified in a docker run command. And as you can see, each individual container is listed under the services tag. The docker command equivalent to run web1 would be something like: docker run -it --name web1 -p 10000:80 -v ${PWD}/web1:/usr/share/nginx/html -d nginx:mainline

Tanner explains that almost every compose file should include the following (a bare-bones skeleton is sketched after the list):

  • The version of the compose file
  • The services which will be built
  • All used volumes
  • The networks which connect the different services
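
Put together, a bare-bones skeleton covering those four sections might look roughly like this (the service, volume, and network names are placeholders):

    version: "3"              # version of the compose file format

    services:                 # the services (containers) to be built and run
      app:
        image: my-app:latest
        volumes:
          - app-data:/data
        networks:
          - backend

    volumes:                  # all named volumes used by the services
      app-data:

    networks:                 # the networks which connect the different services
      backend: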

Now that we have a brief understanding of the docker-compose file structure, let’s talk about the use cases for Compose and their benefits.

Portable Development Environments

As opposed to running multiple containers with separate docker run commands, you can simply use docker-compose up to deploy all the containers specified in your docker-compose.yml file. It is just as easy to stop all of the containers by running docker-compose down. This gives developers the ability to run an application and configure its services all within a single environment. Since the compose file manages all of the dependencies, it is possible to run an application on any machine with Docker installed.
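
As a rough sketch of that workflow, run from the directory containing the docker-compose.yml:

    # start every service defined in docker-compose.yml, in the background
    docker-compose up -d

    # list the containers that Compose created
    docker-compose ps

    # stop and remove those containers (and the default network)
    docker-compose down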

Automated Testing

A beneficial use case of a docker-compose file is automated testing environments. Compose offers an isolated testing environment that closely resembles your local OS and that can easily be created or destroyed.

Single Host Deployments

Compose can be used to deploy and manage multiple containers on a single system. Because applications are maintained in an isolated environment, it is possible to run multiple copies of the same environment on one machine, and when running through Docker Compose, interference between different projects is prevented.

Conclusion

Hopefully this blog post helped you learn as much about Docker Compose as it helped me. For the most part, docker-compose files make it possible to run multi-container applications with a single command. While researching this topic, I’ve come to believe that docker-compose files will become the standard for running applications through Docker, if they aren’t already. I’d like to write more about this topic, and I think a blog post going in-depth on the structure of docker-compose files would be useful.

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Design smells

Resource link: https://flylib.com/books/en/4.444.1.47/1/

This week I decided I wanted to learn more about the different design smells. I understood the definitions from the activities in class, but I wanted to learn more about what exactly each design smell means and how to avoid them. I think it’s important to learn about the design smells so that you know to look out for them when working on a project yourself. Accidentally introducing one of the design smells into your code could lead to a lot of difficulty and frustration if you ever need to make changes later. For this reason, it is best to actively try to avoid them to ensure that whatever code you write will always be easy to modify and maintain.

This resource sums up exactly what each of the design smells is, why it’s bad, and what the consequences of introducing it are. I liked that it uses practical examples and analogies to make the concepts easier to understand. While the concepts may be hard to grasp because of how abstract they are, breaking them down or applying them to a situation everyone knows makes it much easier to understand what is being explained.

The resource breaks down into sections, each describing a different design smell. The first one is rigidity. Rigidity is described as when code is difficult to make even small changes in. This is bad because code frequently needs to be modified or changed, and if it’s difficult to make even small changes such as bug fixes, then that’s a very large issue that must be addressed.

The next design smell is fragility. Fragility is similar to rigidity in that it makes it difficult to make changes to code, but with fragility it is difficult because the code has a tendency to break when changes are made, whereas with rigidity things don’t necessarily break, but the code is designed in such a way that changes are very difficult to make.

Next, immobility is the design smell where a piece of code could be used elsewhere, but the effort involved in moving it to where it could be useful is too great to be practical. This is an issue because it means that instead of being able to reuse a piece of code, you have to write completely new code, wasting time that could be spent on more important changes.

Next, viscosity is the design smell where changes to code could be made in a variety of different ways. This is an issue because time might be wasted deciding which implementation of a change should be made. Disagreements might also happen about how a change should be made, meaning that a team won’t be able to work towards the same goal.

The next design smell is needless complexity. Needless complexity is usually when code is added in anticipation of a change that might need to be made in the future. This could lead to bloated code that has many features that aren’t needed or aren’t being used. It is best to add code only when it’s needed to avoid this, and to reduce the overall complexity.

Next, needless repetition is the design smell where sections of code are repeated over and over rather than being abstracted. This makes code hard to change or modify because it has to be changed in many different locations instead of just one, as it would be if it were abstracted. This is the benefit of abstraction: a modification that changes a lot of how the code functions can be made by altering code in one location.
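
As a small, made-up Java illustration, the first method below writes the same 10% member discount out three times, so changing the rule means editing three separate places; the helper method keeps the rule in exactly one location:

    public class DiscountExample {

        // Needless repetition: the same 10% member discount appears three times,
        // so changing the rule means editing three separate lines.
        static void repeated(boolean isMember) {
            double bookPrice = isMember ? 20.00 * 0.90 : 20.00;
            double dvdPrice  = isMember ? 15.00 * 0.90 : 15.00;
            double gamePrice = isMember ? 60.00 * 0.90 : 60.00;
            System.out.println(bookPrice + " " + dvdPrice + " " + gamePrice);
        }

        // Abstracted: the rule lives here and only here, so a future change
        // (say, a 15% discount) is made in exactly one location.
        static double memberPrice(double base, boolean isMember) {
            return isMember ? base * 0.90 : base;
        }

        static void abstracted(boolean isMember) {
            System.out.println(memberPrice(20.00, isMember) + " "
                    + memberPrice(15.00, isMember) + " "
                    + memberPrice(60.00, isMember));
        }

        public static void main(String[] args) {
            repeated(true);
            abstracted(true);
        }
    }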

Finally, opacity is the design smell where code is written in a way that is hard to understand. A piece of code could be considered opaque for a number of reasons, but in general it is when code that is understandable to one person might not be understandable to others. To avoid this, code should be written in a clear and concise manner that is easy to trace and understand, no matter who is working on it.

From the blog CS@Worcester – Alex's Blog by anelson42 and used with permission of the author. All other rights reserved by the author.

Exploring Microservices

[What are Microservices? | IBM]

To prepare for my capstone, I decided to read up on microservices. This article is mainly just describing what they are, but it also mentions some common pitfalls towards the end. I was also interested in exploring criticisms of microservice architecture, but I want to keep the scope of this post short. However, I think it’s a pretty straightforward trade-off, which I will mention in a moment.

Microservice architecture is an architectural approach in which one application is composed of smaller components called services. This is distinct from other forms of software encapsulation because services essentially function as distinct programs, potentially built on different programming languages and frameworks and with their own databases. Rather than passing messages within the same process, services communicate over a network through a shared API.

Services can be updated individually without knowledge of the overall application. Different programming languages can be used for different components to better suit the needs of those components. Individual services can be scaled when necessary, rather than the whole application. This approach also demonstrates the quality of “loose coupling,” which means lowering the number of dependencies between components as much as possible. The risk of a change to one service introducing problems in other services is still present, but minimized.

In general, this kind of technique basically combats complexity by adding complexity. It isn’t applicable to every situation. It should not be chosen if the overhead of designing the system is significantly more than the overhead of not doing so, which will often be the case for small problems.

For the purpose of the LibreFoodPantry project, microservice architecture is useful. In another scenario, such as one where high performance is a major concern, it may be a problem. If different services are busy sending messages back and forth, it can waste a lot of processor cycles on providing flexibility that isn’t really necessary.

Microservice architecture introduces complexity in communicating between all the services. Different tech stacks being used between different services raises the amount of general knowledge a person needs to be able to understand the whole application.

All this being said, I’m actually looking forward to working on a project using this architecture. The only large project I’ve worked on before was the backend for a school website, which was using a more monolithic approach. We were locked in to one specific language, and what we were doing was often a little inscrutable (although this may be in part because I was a junior in high school). I think that microservice architecture allows for more flexibility in low-level decision making, which will make using it in practice more engaging.

From the blog CS@Worcester – Tom's Blog by Thomas Clifford and used with permission of the author. All other rights reserved by the author.

Nurture Your Passion

This week I decided to talk about the Nurture Your Passion pattern. I think this pattern applies to a lot of people in different ways. Our success comes not so much from what we do, but from how well we do it. It also illustrates that regardless of your job or your position on the company ladder, you can be successful if you have passion for your work. Regardless of your current job, bringing passion to your work can lay a foundation for success, not just in your current job, but for every rung you want to climb up the company ladder. You may hate your job now, but the attitude you take towards it can play a pivotal role in your career.

The book describes a case where you work in an environment that stifles your passion for the craft, and what solutions you can follow. It is hard for your passion to grow when exposed to such hostile conditions, but there are some basic actions you can take to sustain it. Find something at work that interests you, identify it as something you enjoy, and pour yourself into it. Join a local user group that focuses on something you want to learn more about. Immersing yourself in some of the great literature of our field can carry you through the rough spots when your passion is in jeopardy. Moving into an organization that offers career paths congruent with your own can protect your passion.

This is some good advice to follow. It is hard to find everything in one place, but at least if you do something you like, you are going to be happier. Remember, passion is an emotion, a state of mind, so the first thing you have to do is motivate yourself. Turn to another emotion to find the motivation that you need. Once you have the motivation, you can apply the passion. Remember, it is not about how you feel about your job; it is about putting passion into your work. Maybe you need to learn new skills, or you just need to fully engage yourself in your work.

References:


Apprenticeship Patterns by Dave Hoover and Adewale Oshineye

From the blog CS@Worcester – Tech, Guaranteed by mshkurti and used with permission of the author. All other rights reserved by the author.

The Long Road

When working on an open-source project, get in the habit of downloading the latest version of the code (preferably from their source control system) so you can review its history and track future developments. Take a look at the structure of the codebase and think about why the code is organized the way it is. Take a look at the way the developers organize their code modules to see if it makes sense, and compare it to the way you might have done it. Try to refactor the code to understand why its coders made the decisions they did, and think about what the code would look like if you were the one coding it. Not only will this give you a better understanding of the projects, it also ensures that you can build those projects. If you’ve found a better way to do something, you’re ready to contribute code to the project. Inevitably, as you go through the code, you’ll come across decisions you completely disagree with. Ask yourself if the developers of the project might know something you don’t, or vice versa. Consider the possibility that this is a legacy design that needs to be refactored, and consider whether making a toy implementation of the relevant feature would help clarify the issue.

You end up with a toolbox filled with all sorts of quirks that you’ve collected from other people’s code. This will hone your ability to solve small problems more quickly and easily. You’ll be able to tackle problems that others think are impossible to solve because they don’t have access to your toolbox. Take a look at the code for the Git distributed source control system written by Linus Torvalds, or any code written by Daniel J. Bernstein (known as DJB). Programmers like Linus and DJB occasionally make use of data structures and algorithms that most of us have never even heard of. They’re not magicians — they’ve just spent their time building bigger and better toolboxes than most people. The great thing about open source is that you can look at their toolboxes and make their tools your own. One of the problems in software development is the lack of teachers. But thanks to the proliferation of open-source projects on sites such as SourceForge.net and GitHub, you can learn from relatively representative code examples from the world’s programmer community.

In ODS, Bill Gates says: “The most subtle test of programming ability is giving the programmer about 30 pages of code and seeing how quickly he can read through it and understand it.” He realized something very important. People who learn quickly, directly from the source code, will soon become better programmers, because their teachers are the lines of code written by every programmer in the world. The best way to learn patterns, idioms, and best practices is to read open source. Look at how other people code. It’s a great way to stay relevant, and it’s free. — Chris Wanstrath at Ruby 2008

Pick an open-source project with deep algorithms, such as the Subversion, Git, or Mercurial source control systems. Browse through the project’s source code and jot down any algorithms, data structures, and design ideas that seem novel to you. Then write a blog post describing the structure of the project and highlighting the new ideas you’ve learned. Can you find a situation in your daily work where the same ideas can be applied?

From the blog haorusong by Unknown and used with permission of the author. All other rights reserved by the author.

Craft over Art

This week I took a look at the pattern “Craft over Art”. According to this pattern, programming is a craft not an art. As software developers, we are being hired to build some piece of software to solve a customer’s problem. We should not be trying to indulge our own interests by creating something we think is beautiful. While our software can be beautiful, it must primarily be functional.

This pattern made me think about how easy it is to get drawn into the program you are writing, especially if it is a project you are passionate about. You can quickly get sucked in and start trying to put in every bell and whistle you can, especially with projects that include a visual aspect. As students, it is especially important not to go overboard. While extra effort is appreciated by instructors, you cannot get above an A grade anyway. That time should be spent on other things.

An important point here is also that as software developers we are providing a service to customers. Because of this, we cannot wait for inspiration to strike before we start working on a project. We should still do satisfactory work even if the project is not something we are passionate about. This is a profession, not a hobby.

Something that was not touched on here is that oftentimes the most beautiful solution in programming is also the most functional and the most efficient. Programming is a very function-driven field. Unlike painting or sculpting, where the greatest pieces can also be the most abstract, programming rewards the code that is the cleanest and the most functional. It is unlikely for a programmer to create something that is more beautiful than useful, especially in a professional setting. I could definitely see it happening, though, in a personal project or in a field like game development.

The most important part I took away from this pattern is the need for a minimal level of quality. I typically focus on getting the job done as fast as possible. I need to learn to maintain a certain internal standard. Quality takes time, and I must work on taking the time to do it right. In the future, customers will not be as forgiving or provide instructions as precise as professors do. I have to be prepared to solve real problems for real people, problems that do not already have solutions.

From the blog CS@Worcester – Half-Cooked Coding by alexmle1999 and used with permission of the author. All other rights reserved by the author.

Sustainable Motivations – Apprenticeship Pattern

What this apprenticeship pattern dives deep into is that along your journey of becoming a software craftsman, there will be many times when you face trials and tribulations. There will be instances where you are burdened with working on a complex project and forced to solve problems where you have no idea where to even start. I am sure most of us on our journeys have faced this pressure and this feeling of wondering whether this is all worth doing or whether we are cut out for it. However, this pattern says that we need to have clear and strong motivations when these trials come to our front door. Many people have different motivations as well as differing goals and ambitions. We are all developing software and programming for various reasons, and clearly defining these things helps us move forward. We wouldn’t have made it this far if it weren’t for some motivation that kept us going.

I think this chapter is very relevant for all of us who are trying to become software engineers and architects, and it helps us understand that this journey isn’t just smooth sailing. There will be times when everything feels easy and you feel lucky to even be in these circumstances. However, there will be other times that bring out either the best or the worst in us, when we face hard programming problems that can mess with us mentally. As a result of this pressure, we need to keep our road and ambition clear on where we want to be heading. Our journey is unique, but to keep the journey going we need some strong interior purpose and motivation to pick up our heads and keep moving forward on those days when we feel like there is no purpose for doing this. It is at these times that our mind is fogged up with the current problem and not with the bigger picture as to why we are doing all this in the first place. To clear up that fog, we need sustainable motivations to be our anchor and help us get through and keep the boat moving. Overall, software architects will need to develop the mindset of believing in themselves and knowing they are doing what is right for them if they have a clear vision.

From the blog CS@Worcester – Roller Coaster Coding Journey by fbaig34 and used with permission of the author. All other rights reserved by the author.