Docker in a nutshell

The blog post starts by identifying the different teams in software development and what each is responsible for. It also discusses how the development team's work can run counter to the DevOps team's and create friction between the two. For example, when the development team adds a new feature, that feature might break existing code and undermine its stability, which leads to problems for the DevOps team. The post then explains what Docker and containerization are, defining both terms and listing the benefits and features of each, and describes how they solve problems that arise when we use virtual machines; for example, Docker offers faster build and testing times. Next, the post walks through the Docker architecture, identifying its four components: images, containers, registries, and the Docker Engine, and defining what each one does. The rest of the post covers the components of the Docker Engine (such as the daemon and how to submit requests to it), why we should use Docker, what its benefits are, and what its alternatives are.

I really like this blog post by BNC Software because it is neither too long nor difficult to read. I have always had a terrible memory, and I am the kind of person who can form a rough understanding from reading something but won't fully understand or remember the topic until I get a formal definition. I chose this post because it does exactly that for me: it explains the terminology in almost layman's terms, which makes the material easier for me to remember and understand. Another reason I chose it is that it explains the basics of Docker very well and is a great resource to have as we move on to more advanced Docker topics. While reading, I thought it raised an interesting point about how the development team can interfere with the work of the DevOps team. This was a question I had before but never really put in the time to research. It reminds me of what we learned in Software Process Management and of LeBlanc's Law ("Later equals never"). Setting up a Docker container takes real initial effort and time, but it saves time in the long run because we run into fewer problems with incompatible code and less risk of disrupting the stability of the codebase.

From the blog CS@Worcester – Just a Guy Passing By by Eric Nguyen and used with permission of the author. All other rights reserved by the author.

CS-343 Post #3

I wanted to read more about microservices and monolith architecture after the recent class activity comparing and contrasting the two. Both have their benefits and share similarities, but there are important differences that make people question which is the better choice for the designs they are working on.

In the activity's example, the monolith architecture had a single database and was more self-contained, but it was riskier to change and update because of how heavily the whole system relies on that one database. The microservices design split the system across two databases, so there is less reliance on each one: a change to one database affects at most half the system, while a change to the monolith risks affecting the whole system. The monolith is faster because it uses one database and fewer components overall, but microservices are safer and better suited to larger or more complex systems. Microservices seem like the better choice of the two, but the monolith has benefits worth keeping in mind.

I wanted to look further into the two architectures and found an article from N-iX, "Microservices vs Monolith: which architecture is the best choice?", that compares them in more depth. The monolith was described as the default model for a long time, but microservices have been gaining traction while monoliths are used less. The purpose of the article was to weigh the two and decide whether the monolith is still useful and what it could still be used for.

The article went over the strengths and weaknesses of both, covering the pros and cons already discussed in the activity, such as the risk of changing components in a monolith and microservices' suitability for more complex systems. It also raised some points that were either not mentioned or only briefly touched on in the activity, which makes the debate between the two architectures more even.

While the monolith is riskier to change, it is simpler to work with and has fewer components overall, which can make it effective for less complicated systems. The article also notes that a monolith is easier to debug and test because it works as a single unit, while microservices are many. Microservices handle complex systems better and can be easier to understand because the architecture is divided into separate services and databases, and changes carry less risk. Their flaws are that they can become too complex, testing is more difficult, and there are more cross-cutting concerns.

In conclusion, the monolith still has its benefits and is better than microservices for simple, smaller systems, while microservices are better for complex, larger systems but require more experience to use efficiently.

https://www.n-ix.com/microservices-vs-monolith-which-architecture-best-choice-your-business/

From the blog Jeffery Neal's Blog by jneal44 and used with permission of the author. All other rights reserved by the author.

Where Are Programming, AI, and the Cloud Headed in 2021?

This study is based on title usage on O'Reilly online learning. The data includes all usage of the O'Reilly platform, not just content that O'Reilly has published, and certainly not just books. I have included search data in the graphs.

Programming languages:

In the first figure, you can see year-over-year growth in usage and the number of search queries for several popular languages. The top programming languages according to O'Reilly are Python (up 27%), Java (down 3%), C++ (up 10%), C (up 12%), and JavaScript (up 40%). Looking at 2020 usage rather than year-over-year changes, it's surprising to see JavaScript so far behind Python and Java (JavaScript usage is 20% of Python's and 33% of Java's).

Past the top five languages, Go has grown 16% and Rust has grown 94%. Go has clearly established itself, particularly as a language for concurrent programming, and Rust is likely to establish itself for "system programming."

Figure 2 shows what happens when you add the use of content about Python, Java, and JavaScript to the most important frameworks for those languages.

Adding usage and search query data for Spring (up 7%) reverses Java’s apparent decline (net-zero growth). Looking further at JavaScript, if you add in usage for the most popular frameworks (React, Angular, and Node.js), JavaScript usage on O’Reilly online learning rises to 50% of Python’s, only slightly behind Java and its frameworks.

None of these top languages are going away, though their stock may rise or fall as fashions change and the software industry evolves.

We see several factors changing programming in significant ways:

Multiparadigm languages:

According to O'Reilly, the use of content on functional programming has increased 14% since last year. Content on object-oriented programming is up even more, with 29% growth. Starting with Python 3.0 in 2008 and continuing with Java 8 in 2014, programming languages have added higher-order functions (lambdas) and other "functional" features. Several popular languages (including JavaScript and Go) have had functional features from the beginning. This trend started over 20 years ago (with the Standard Template Library for C++), and we expect it to continue.
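As a small, invented illustration of these "functional" features (not from the O'Reilly study), here is what higher-order functions and lambdas look like in Python:

```python
# Higher-order functions: functions that take or return other functions.
def compose(f, g):
    """Return a new function that applies g first, then f."""
    return lambda x: f(g(x))

double = lambda n: n * 2
increment = lambda n: n + 1

# compose(increment, double) means: double first, then increment.
double_then_increment = compose(increment, double)
print(double_then_increment(5))  # 11

# Comprehensions cover the classic functional map/filter combination.
squares_of_evens = [n * n for n in range(10) if n % 2 == 0]
print(squares_of_evens)  # [0, 4, 16, 36, 64]
```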

Concurrent programming:

Platform data for concurrency shows an 8% year-over-year increase. Java was the first widely used language to support concurrency as part of the language. Go, Rust, and most other modern languages have built-in support for concurrency. Concurrency has always been one of Python's weaknesses.
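As a quick sketch (the task function is made up), even Python's standard library can run I/O-bound work concurrently via a thread pool, though concurrency remains one of its weaker areas:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(n):
    # Stand-in for an I/O-bound task, such as a network request.
    return n * n

# Submit tasks to a thread pool; map() returns results in input order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```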

Dynamic versus static typing:

The distinction between languages with dynamic typing (like Ruby and JavaScript) and statically typed languages (like Java and Go) is arguably more important than the distinction between functional and object-oriented languages. Python 3.5 added type hinting, and more recent versions have added additional static typing features. TypeScript, which adds static typing to JavaScript, is coming into its own (12% year-over-year increase).
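For instance, the type hinting added in Python 3.5+ looks like this (the functions here are invented for illustration); a checker such as mypy can verify the annotations statically while the code still runs dynamically:

```python
from typing import Optional

def greet(name: str, times: int = 1) -> str:
    """Annotations document intent: takes a str and an int, returns a str."""
    return ", ".join([f"Hello, {name}"] * times)

def find_user(user_id: int) -> Optional[str]:
    # Optional[str] says this may return a str or None.
    users = {1: "ada", 2: "grace"}
    return users.get(user_id)

print(greet("ada", 2))  # Hello, ada, Hello, ada
print(find_user(3))     # None
```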

Low-code and no-code computing: 

Low-code is real and is bound to have an effect. Spreadsheets were the forerunner of low-code computing: when VisiCalc was first released in 1979, it enabled millions to do significant and important computation without learning a programming language. Democratization is an important trend in many areas of technology; it would be surprising if programming were any different.

AI, Machine Learning, and Data:

Healthy growth in artificial intelligence has continued: machine learning is up 14%, while AI is up 64%; data science is up 16%, and statistics is up 47%. While AI and machine learning are distinct concepts, there’s enough confusion about definitions that they’re frequently used interchangeably. We informally define machine learning as “the part of AI that works”; AI itself is more research oriented and aspirational. If you accept that definition, it’s not surprising that content about machine learning has seen the heaviest usage: it’s about taking research out of the lab and putting it into practice. It’s also not surprising that we see solid growth for AI, because that’s where bleeding-edge engineers are looking for new ideas to turn into machine learning.

It’s possible that AI (along with machine learning, data, big data, and all their fellow travelers) is descending into the trough of the hype cycle. We don’t think so, but we’re prepared to be wrong. As Ben Lorica has said (in conversation), many years of work will be needed to bring current research into commercial products. 

The future of AI and machine learning:

  • Many companies are placing significant bets on using AI to automate customer service. We’ve made great strides in our ability to synthesize speech, generate realistic answers, and search for solutions.
  •  We’ll see lots of tiny, embedded AI systems in everything from medical sensors to appliances to factory floors. Anyone interested in the future of technology should watch Pete Warden’s work on TinyML very carefully.
  • Natural language has been (and will continue to be) a big deal. GPT-3 has changed the world. We’ll see AI being used to create “fake news,” and we’ll find that AI gives us the best tools for detecting what’s fake and what isn’t.

Web Development:

Since the invention of HTML in the early 1990s, the first web servers, and the first browsers, the web has exploded (or degenerated) into a proliferation of platforms. Those platforms make web development infinitely more flexible: They make it possible to support a host of devices and screen sizes. They make it possible to build sophisticated applications that run in the browser. And with every new year, “desktop” applications look more old-fashioned.

The foundational technologies HTML, CSS, and JavaScript are all showing healthy growth in usage (22%, 46%, and 40%, respectively), though they're behind the leading frameworks. We've already noted that JavaScript is one of the top programming languages, and the modern web platforms are nothing if not the apotheosis of JavaScript. In the web's early days, anyone could learn the basics with "view source." Twenty-five years later, that's no longer true: you can still "view source," but all you'll see is a lot of incomprehensible JavaScript. Ironically, just as other technologies are democratizing, web development is increasingly the domain of programmers.

Clouds of All Kinds:

It’s no surprise that the cloud is growing rapidly. Usage of content about the cloud is up 41% since last year. Usage of cloud titles that don’t mention a specific vendor (e.g., Amazon Web Services, Microsoft Azure, or Google Cloud) grew at an even faster rate (46%). Our customers don’t see the cloud through the lens of any single platform. We’re only at the beginning of cloud adoption; while most companies are using cloud services in some form, and many have moved significant business-critical applications and datasets to the cloud, we have a long way to go. If there’s one technology trend you need to be on top of, this is it.

The horse race between the leading cloud vendors, AWS, Azure, and Google Cloud, doesn’t present any surprises. Amazon is winning, even ahead of the generic “cloud”—but Microsoft and Google are catching up, and Amazon’s growth has stalled (only 5%). Use of content about Azure shows 136% growth—more than any of the competitors—while Google Cloud’s 84% growth is hardly shabby. When you dominate a market the way AWS dominates the cloud, there’s nowhere to go but down. But with the growth that Azure and Google Cloud are showing, Amazon’s dominance could be short-lived.

While our data shows very strong growth (41%) in usage for content about the cloud, it doesn't show significant usage for terms like "multicloud" and "hybrid cloud" or for specific hybrid cloud products like Google's Anthos or Microsoft's Azure Arc. These are new products for which little content exists, so low usage isn't surprising. But the usage of specific cloud technologies isn't that important in this context: usage of all the cloud platforms is growing, particularly content that isn't tied to any vendor. We also see that our corporate clients are using content that spans all the cloud vendors; it's difficult to find anyone who's looking at a single vendor.

Security and Privacy:

Security has always been a problematic discipline: defenders have to get thousands of things right, while an attacker only has to discover one mistake. And that mistake might have been made by a careless user rather than someone on the IT staff. 

CISSP content and training is 66% of general security content, with a slight (2%) decrease since 2019. Usage of content about the CompTIA Security+ certification is about 33% of general security, with a strong 58% increase. 

It’s disappointing that we see so little interest in content about privacy, including content about specific regulatory requirements such as GDPR. We don’t see heavy usage; we don’t see growth; we don’t even see significant numbers of search queries.

Work cited: 


Where Programming, Ops, AI, and the Cloud are Headed in 2021 – O’Reilly (oreilly.com)

From the blog CS@Worcester blog – Syed Raza by syedraza089 and used with permission of the author. All other rights reserved by the author.

Docker compose

For this week's Homework 3, I had a chance to practice with Docker Compose. However, except for the little knowledge I picked up from Activity 10 in class, I had no clue what Docker Compose was. Therefore, I wanted to learn more about it to get a better understanding and also to learn how to write a Docker Compose file.

Docker Compose Tutorial: advanced Docker made simple is one of the resources I found useful for getting an overview of Docker Compose, how to write a .yml file, and how to run it with docker-compose commands. From it, I learned that Docker Compose is used to define and run multi-container Docker applications, and I got an idea of which components should be included in a .yml file to create a container. Specifically, to create a container service, we need entries like build, links, image, ports, volumes, command, environment, etc.; the function of each entry is described in detail on the website. For example, going back to Activity 10, I am now able to convert the docker run command from that activity into an equivalent .yml file.

Based on that docker command, my .yml file will create a container named cache using the image redis with the tag 6.0.8. The container's internal port 6379 maps to the external port 11000, and it mounts the local ./redis.conf file to /etc/redis.conf inside the container. So the components that should be included in my .yml file are version, services, web, image, ports, and volumes.

First, version defines which version of the Compose file format I am using. services defines all the different containers I want to create. web represents the container name; in this case, I will replace web with cache. image specifies which image is used to create the container (redis:6.0.8). ports maps the external port to the internal port (11000:6379). volumes mounts the source file, ./redis.conf, to the destination path, /etc/redis.conf.
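Putting those entries together, the equivalent docker-compose.yml would look something like this (the Compose file format version is my assumption; the service name and values come from the activity described above):

```yaml
version: "3.8"

services:
  cache:                               # the service/container name
    image: redis:6.0.8                 # image and tag
    ports:
      - "11000:6379"                   # external:internal
    volumes:
      - ./redis.conf:/etc/redis.conf   # host file : container path
```

Running docker-compose up -d in the directory containing this file would then start the container.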

In short, I believe the website I found is a good resource, because it offers really useful information about Docker Compose, such as its definition, how to write a .yml file, and the docker-compose commands. Thanks to this resource, I am more familiar with .yml files and docker-compose commands, and I used what I learned from it to do Homework 3.

From the blog CS@Worcester – T's CSblog by tyahhhh and used with permission of the author. All other rights reserved by the author.

Containers vs. Virtual Machines and why they are important to modern computing

Most people in the computer science field have either heard of or directly interacted with containers or virtual machines. They are an important part of modern computing: ranging from massive cloud servers to simply running multiple operating systems on one machine, there are many uses for these tools. While they operate in a similar manner, there are situational benefits to using one over the other. First, however, we should discuss what each of these tools is, how they work, and how they differ from each other.

Virtual Machines: A virtual machine at its simplest works like a normal computer, containing an operating system and any other apps/services it has been configured with. The difference between a virtual machine and a physical computer is that a virtual machine is purely software that runs on top of the host computer's hardware without interfering directly with the native operating system. This opens up many use cases, such as testing new code in a separate environment, running different operating systems on top of the native OS, or even running a cloud-based server with multiple different operating systems. Since virtual machines are just software, they are extremely portable: a user can easily transfer a virtual machine from one device to a completely different one without much hassle. One big downside is that virtual machines can become very large. Since each virtual machine uses its own OS image and runs its own apps/services, it has space requirements similar to the native OS, and running multiple virtual machines on the same host machine only compounds this issue.

Containers: Like a virtual machine, a container is used to virtualize a system environment; unlike a virtual machine, however, a container does not need its own operating system image, as it shares the host machine's operating system. This lets containers take up much less space, allowing more of them to run on the same machine, and it also makes them much faster, since containers start up much quicker than virtual machines. Like virtual machines, containers can be moved from one machine to another, as long as the new machine runs the same OS as the old one. This brings up the downside of containerization: once a container is configured for a host OS, it is mated to that OS and will not run on any other operating system unless it is reconfigured from scratch. This is the tradeoff that allows containers to be so much quicker and more efficient than virtual machines.

What are the use cases: Now that the similarities and differences between virtual machines and containers have been discussed, we can talk about the specific situations in which each of these tools shines. For someone just looking to run a separate instance of an OS on their machine, a virtual machine will suffice: although it is slower and larger than a container would be, it allows much more versatility with the host machine's OS and hardware. In corporate or large-scale settings, virtual machines can be used when multiple system configurations are needed to test a product; instead of buying a new machine for each specific configuration, virtual machines can be spun up and configured as needed. Containers can achieve a similar effect, although they are stuck with the base OS configuration. Where containers really shine is in massive server deployments, where their smaller size and modularity let them perform their tasks much more efficiently and allow many more containers to fit on a single server. Likewise, containers can be used to split a complex application across multiple containers, letting developers make changes to each individual application module without needing to rework the entire application.

Sources:

https://azure.microsoft.com/en-us/overview/what-is-a-container/#why-containers

https://azure.microsoft.com/en-us/overview/what-is-a-virtual-machine/#what-benefits

https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/containers-vs-vm

From the blog CS@Worcester – Sebastian's CS Blog by sserafin1 and used with permission of the author. All other rights reserved by the author.

Design Patterns

  • Creational
  • Structural
  • Behavioral

In software engineering, a design pattern describes a reusable solution to problems that commonly occur in software design. Patterns capture best practices that have evolved over a long period through trial, error, and testing by experienced software developers.

1) Creational design patterns are patterns that describe how objects are created. They reduce complexity and fragility by making the creation of objects more controlled.

In most cases, using the new operator directly is considered harmful, because it scatters object creation throughout the application. Over time, this makes changing an implementation more and more challenging, for the simple reason that the classes become tightly coupled to one another.

Creational design patterns address this issue by decoupling the client from the actual initialization process.

1. Singleton – Ensures that only a single instance of an object exists throughout an application.

2. Factory Method – Creates objects of different related classes without specifying the exact class of the object to be created.

3. Abstract Factory – Creates families of related or dependent objects.

4. Builder – Constructs complex objects one step at a time, in other words, step by step.
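To make the first two patterns concrete, here is a minimal Python sketch (class names like Config and WebButton are my own, not from the referenced articles):

```python
class Config:
    """Singleton: only one instance exists for the whole application."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


# Factory Method: create related objects without naming the exact class.
class Button:
    def render(self) -> str:
        raise NotImplementedError


class WindowsButton(Button):
    def render(self) -> str:
        return "windows button"


class WebButton(Button):
    def render(self) -> str:
        return "web button"


def button_factory(platform: str) -> Button:
    """The client asks for a platform; the factory picks the class."""
    buttons = {"windows": WindowsButton, "web": WebButton}
    return buttons[platform]()


print(Config() is Config())              # True: same instance every time
print(button_factory("web").render())    # web button
```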

2) Structural design patterns – These patterns deal with the organization of different classes and objects, composing them into larger structures while providing new functionality.

How a structural design pattern is used:

When two interfaces are incompatible but need to work together, the Adapter design pattern applies. The Adapter converts the interface of one class into another interface that the client expects. In other words, the adapter allows classes to work together that, because of their incompatibility, otherwise could not.
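A tiny Python sketch of the Adapter idea (the socket/plug classes and the voltage conversion are invented for illustration):

```python
class EuropeanSocket:
    """Existing class with an interface the client can't use directly."""
    def voltage_230(self) -> int:
        return 230


class USPlug:
    """The interface the client expects."""
    def voltage_120(self) -> int:
        raise NotImplementedError


class SocketAdapter(USPlug):
    """Adapter: wraps the incompatible class and translates calls."""
    def __init__(self, socket: EuropeanSocket):
        self._socket = socket

    def voltage_120(self) -> int:
        # Translate the incompatible interface into the expected one
        # (a crude stand-in for a real voltage transformer).
        return self._socket.voltage_230() // 2 + 5


adapter = SocketAdapter(EuropeanSocket())
print(adapter.voltage_120())  # 120
```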

3) Behavioral design patterns describe the responsibilities of different objects and the interactions between them.

Generalizing, these patterns arrange the interaction between objects so that they can communicate with one another smoothly.

In simpler words, to avoid hard-coded dependencies, the application and the client should be coupled as loosely as possible.

There are 12 behavioral design patterns:

1. Chain of Responsibility

2. Command

3. Interpreter

4. Iterator

5. Mediator

6. Memento

7. Observer

8. State

9. Strategy

10. Template Method

11. Visitor

12. Null Object
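As one concrete example from the list, here is a minimal Python sketch of the Observer pattern (class names are illustrative): observers register with a subject and are notified of changes without the two being tightly coupled.

```python
class Subject:
    """Keeps a list of observers and notifies them of events."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event: str):
        # The subject only knows observers respond to update();
        # it doesn't depend on their concrete classes.
        for observer in self._observers:
            observer.update(event)


class Logger:
    """A concrete observer that records every event it receives."""
    def __init__(self):
        self.events = []

    def update(self, event: str):
        self.events.append(event)


subject = Subject()
logger = Logger()
subject.attach(logger)
subject.notify("state changed")
print(logger.events)  # ['state changed']
```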

References:

https://www.gofpatterns.com/design-patterns/module2/three-types-design-patterns.php

https://howtodoinjava.com/design-patterns/creational/

https://howtodoinjava.com/design-patterns/structural/

https://www.ifourtechnolab.com/blog/a-complete-guide-on-behavioral-design-pattern-and-chain-of-responsibility-pattern-in-java

From the blog CS@worcester – Xhulja's Blogs by xmurati and used with permission of the author. All other rights reserved by the author.

UML Diagramming

Unified Modeling Language, otherwise known as UML, is one of many modeling languages used in programming. The purpose of modeling languages such as UML is to show the structure of code in order to better understand it. Widespread use of UML started in 1997, when it was adopted as a standard. UML is used with object-oriented programming and is separated into two categories: structural and behavioral diagrams. Structural diagrams show the static aspects of code, while behavioral diagrams show the dynamic parts.

Some concepts that are a big part of UML are classes, objects, inheritance, abstraction, encapsulation, and polymorphism; all of these relate directly to object-oriented programming. Among the structural UML diagrams: class diagrams show static structure; composite structure diagrams show the internal structure of a class and how it interacts with the rest of the program; object diagrams show which instances exist in the code and their relations/associations; and component diagrams show the organization of a system's physical components. Among the behavioral diagrams: state machine diagrams represent the states of the system, activity diagrams show a system's flow of control, and sequence diagrams show the interactions between objects.
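As a small example of how such diagrams can even be written as plain text, here is a hypothetical class diagram in PlantUML notation (a text-to-UML tool; the classes are made up):

```plantuml
@startuml
class Animal {
  - name : String
  + speak() : String
}
class Dog
class Owner

Animal <|-- Dog        ' inheritance: Dog is an Animal
Owner "1" --> "*" Dog  ' association: one owner has many dogs
@enduml
```

Rendering this text produces a class diagram showing the inheritance and association relationships described in the comments.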

I chose this article because we have worked on UML diagrams in class and the article is well structured, telling readers exactly what UML is without overloading them with information. Readers are presented with what UML is and the different ways it is used to show the many facets of object-oriented programming. The site hosting the article also has several other articles covering more complex aspects of UML, so anyone who wants to dive deeper into the subject can do so without having to look for another website.

I liked this article for its simple format, and I feel it helped me with assignments involving UML. Even though we mainly went over class diagrams, learning about the other diagram types and what they are used for was interesting and even helped me understand the significance of class diagrams. Each diagram type in UML has a "specialty," you could say, since each maps out something different, and even though class diagrams are rather simple, they can help you appreciate the benefits UML provides for software-based fields.

https://www.geeksforgeeks.org/unified-modeling-language-uml-introduction/?ref=lbp

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

Design Smells

In class several weeks ago we began a discussion on design smells. Design smells, or code smells, are qualities of a code that may indicate a greater flaw in its design. I had not heard this term before, so I wanted to spend this blog post familiarizing myself with it. Luckily, there are plenty of resources available to help me.

Hiroo Kato examines this topic in great detail in a blog post “Code Smells and 5 Things You Need To Know To Achieve Cleaner Code.” This article is a great resource for anyone looking to learn more about design smells.

Kato discusses several common types of design smells: bloaters, object orientation abusers, change preventers, dispensables, and couplers. Bloaters, as is implied by the name, are sections of code that have grown too large to be efficiently worked with. Object orientation abusers refer to improper implementation of “object-oriented programming principles.” Change preventers complicate the development process by necessitating more involved changes to the code. Dispensables refer to elements of a code that are unnecessary, and couplers refer to “excessive coupling between classes.”

Design smells can occur at various levels of a program. These levels include application level smells (e.g. duplicate code, shotgun surgery), class level smells (e.g. large class, little class), and method level smells (e.g. long method, speculative generality).

Fixing these design smells as they are noticed is important to keep a project clean and efficient. Kato suggests implementing a code review process. Ideally, this would involve going through the code, identifying design smells (either by hand or by using an automated detection tool), and refactoring them. The way you wind up refactoring your code will depend on the design smell it is afflicted with.
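As a tiny, invented illustration of that review-and-refactor loop (the example is mine, not Kato's), a method that mixes validation, computation, and formatting, a mild "bloater," can be split into focused helpers:

```python
# Before: one function validates, computes, and formats all at once.
def report_smelly(prices):
    total = 0
    for p in prices:
        if p < 0:
            raise ValueError("negative price")
        total += p
    return f"total: {total:.2f}"


# After: each piece has one job, which is easier to test and change.
def validate(prices):
    if any(p < 0 for p in prices):
        raise ValueError("negative price")

def total(prices):
    return sum(prices)

def report(prices):
    validate(prices)
    return f"total: {total(prices):.2f}"


print(report([1.50, 2.25]))  # total: 3.75
```

Both versions behave the same, but in the refactored one each smell-prone responsibility can now be reviewed and changed in isolation.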

I chose this resource because I think it offers valuable information. The article is well researched, citing studies where relevant to validate claims. The article was also published very recently, so the information is definitely relevant. The author, Hiroo Kato, seems reputable, having written for the article’s host website for over five years. I was curious about this topic, and Hiroo Kato’s writing has added quite a bit to my understanding of design smells. I was not aware that there were types beyond the few we learned in class, or that there are automated tools available to help identify these smells. It’s really useful to be able to identify and avoid or fix design smells, and I will be utilizing this information in future projects.

From the blog CS@Worcester – Ciampa's Computer Science Blog by graceciampa and used with permission of the author. All other rights reserved by the author.

Visual Studio Code Extensions

Visual Studio Code (VSC) is currently ranked the 3rd most popular IDE by programmers throughout the world [1] and is only eclipsed in usage by Eclipse (pun intended) and by the “full” version of Visual Studio. VSC is considered a lightweight IDE compared to Microsoft’s Visual Studio Enterprise (VSE). In the last 5 years, it has shot up in popularity and is expected to surpass the bigger weightier editors because of its expansive use of plugins. This architecture is groundbreaking in that it can remain compact and easy to use for simple tasks, but, when necessary, can be enhanced dramatically to deal with any level of expansion.

IDEs these days can be ranked by size in 3 categories:

  • The smaller variety are enhanced text editors (e.g., Atom, Sublime Text, and Notepad++).
  • Medium-sized editors, like VSC, are the up-and-coming editors of the future.
  • Larger development environments (e.g., Visual Studio Enterprise, Eclipse, and IntelliJ IDEA and editors based on it, like Android Studio).

The great advantage I see with VSC is its ability to be lightweight out of the box while, with the addition of the right plugins, supporting most languages, tools, and coding situations.

I have worked rather extensively over the years with VSE, Eclipse, and Android Studio. Eclipse is a good editor, but it is a toy compared with VSE and Android Studio. All three of these IDEs (formerly and pretentiously termed VDEs) were powerful, but cumbersome. If you wanted to write a simple framework, you had to stumble through a plethora of menu entries, toolbars, and tabs to find what you needed. What you wanted was there, but sometimes its location was not obvious. Developers did their best to memorize or use cheat sheets of shortcut key combinations to get optimal use out of their coding time, but there was still a lot of minutiae to wade through.

As an example, if I primarily use Java but sometimes write JavaScript code, I only install the plugins related to those languages. If I want to use Android Studio key mapping for my shortcuts, I install that one extension. If I want my code to align automatically, I install “Prettier”. If I want to see my nested code more clearly and make it visually interesting, I install “indent-rainbow”.
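Extensions can also be installed from the command line rather than through the editor’s UI. As a sketch, assuming the `code` CLI is on your PATH, the two extensions mentioned above can be added by their Marketplace IDs:

```
# Install the extensions mentioned above via the VS Code CLI
code --install-extension esbenp.prettier-vscode   # "Prettier" code formatter
code --install-extension oderwat.indent-rainbow   # "indent-rainbow" indentation colorizer
code --list-extensions                            # verify what is installed
```

The same CLI offers `--uninstall-extension` for removal, which matches how easily these can be swapped in and out.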

These extensions can be removed or disabled quite easily. VSC’s activity bar (on the left by default) adds items as extensions are installed. I have additions for Docker, testing, and source-control integration that were added by installing the relevant extensions.

In my next post, I will lay out a list of the extensions I am finding most useful as a student in a Software Architecture class I am taking. We have used PlantUML [2], Docker [3], object-oriented design, design patterns in Java, and semantic versioning for Docker images. We will be proceeding to use REST APIs, Kubernetes, CI/CD, and microservices architectures. As these new subjects are encountered, it is easy to just snap in the appropriate extensions and move ahead from there. Beautiful!

1. https://pypl.github.io/IDE.html

2. https://plantuml.com/

3. https://www.docker.com

From the blog cs@worcester – (Twinstar Blogland) by Joe Barry and used with permission of the author. All other rights reserved by the author.

Docker, and why it is so important

Docker is a relatively new approach to software delivery. With Docker, all that is needed is an image describing the software and its dependencies, and Docker does the rest. This is huge: before, you would need to download and install specific versions of software on each and every computer you wanted to work on, but with Docker, you declare that you want X, Y, and Z, and Docker handles the rest, making life much easier.

The way Docker does this is through what are called “containers”. Containers allow developers to package an application with everything it needs, like dependencies and libraries, and then ship it out as one package. This is where the magic happens: the application will run on any other machine regardless of any customized settings that machine may have that differ from the machine used to write and test the application. In this sense Docker is like a virtual machine, but instead of running an entire virtual operating system, Docker lets applications share the kernel of the system they run on, so an image only needs to ship what is not already on the host computer instead of everything. This translates to a big performance increase for the application and reduces its overall size as well.
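Such a package is described by a Dockerfile, a small config file that lists the base image, the dependencies, and the command to run. As a minimal sketch (the file names and `app.py` entry point here are hypothetical, assuming a simple Python application):

```
# Hypothetical Dockerfile: package an app together with its dependencies
FROM python:3.12-slim                  # base image supplies the interpreter and OS layer
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # dependencies are baked into the image
COPY . .
CMD ["python", "app.py"]               # command the container runs on start
```

Building this with `docker build -t myapp .` produces one self-contained image; anyone with Docker can then `docker run myapp` and get the same environment, regardless of what is installed on their machine.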

Docker is also open source, one of the most important things, in my opinion, software can be these days. This means that anyone can contribute to Docker and modify it to their own needs if they need something Docker does not provide by default. It also means that since anyone can see the inner workings, users can verify that nothing is going on that shouldn’t be, like recording private information to sell to advertisers, for example. Docker makes life easy for developers, allowing them to not have to worry about the system their program will be running on. Docker also gives flexibility through its containers and can reduce the number of systems needed to run a program.

Source

I chose the above article because it gave a good look into what Docker is and how it is useful for software development in today’s world. It explained the key aspects of Docker and why they are useful compared to past methods. In addition, the article offered a good analogy for Docker, comparing its containers to the standardization of shipping containers, which made transporting goods far easier; that roughly translates to what Docker does with its containers.

From the blog CS@Worcester – Erockwood Blog by erockwood and used with permission of the author. All other rights reserved by the author.