Category Archives: Week 5

Blog Discovery

Why do we use Docker?

I chose this article because it goes in depth on the way Docker is used in real life and how efficient its approach to "virtualization" can be, which allows things to run more smoothly. The BMC blog is also full of potential reads about software development, implementation, methods, and other projects they have completed. Since we are already learning the basics of Docker and the command line, this information on how Docker works will be useful for future class projects, and it will give not just me but other students a better understanding of how to operate Docker, its components, and its related resources.

What is Docker exactly?

Docker is an open-source, Linux-based platform that uses containers to build, run, test, and package applications efficiently without losing integrity, because it relies on the operating system (through the Docker Engine) rather than on the computer's hardware. Docker is broken down into elements that help it run more smoothly than traditional virtualization: containers, images, registries, the Dockerfile, and the Docker Engine. A Docker image is a set of instructions that makes up a container and describes how to run the application. To run the images, the Docker Engine maintains the containers on the host machine. Wherever Docker is installed, containers behave the same way, since they all run on the same engine. For example, when we want to build a website with a web server and a database such as MySQL, we can simply make an image for MySQL, configure it to use the port we want, then package the web server in a container that runs the preconfigured images and their dependencies. If more servers need to be added, the containers are easy to deploy and easy to migrate to a new server.
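As a rough sketch of that web server plus MySQL example, a docker-compose file might look like the following (the service names, image tags, ports, and password are placeholders of my own, not taken from the article):

```yaml
# docker-compose.yml - minimal sketch of the web server + MySQL example
services:
  db:
    image: mysql:8.0                 # preconfigured MySQL image
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential, change in real use
    ports:
      - "3306:3306"                  # the port we chose for the database
  web:
    image: nginx:latest              # stand-in for the web server container
    ports:
      - "8080:80"
    depends_on:
      - db                           # start the database before the web server
```

Running `docker compose up` would then start both containers together, and the same file could be reused to deploy them on a new server.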

I have no prior experience with Docker, but I now have a good understanding of how useful it can be in software development and how efficiently it can be used, especially when compared to the cost of using VMs. I could have used this in combination with my database class when I built a MySQL server: I could have practiced with my database in a Docker container and seen firsthand how to compose it into a container, saving the hassle if I had wanted to build a web server or an application entirely on Docker services.

Blog resource link: https://www.bmc.com/blogs/docker-101-introduction/

From the blog cs@worcester – Dahwal Dev by Dahwal Charles and used with permission of the author. All other rights reserved by the author.

DevOps With Docker

This week, as we have begun to use Docker and explore how to use it further, I thought it would be a good time to look further into what Docker is and why it is used professionally. In doing so, I found a relatively short blog post by Sudip Sengupta called Introduction To Docker: A Beginner’s Guide that I think does a pretty good job of explaining the positives of using Docker as a development tool.

The post begins by covering why a lot of companies are switching to a containerized framework for development. Mostly, they explain, it is due to ease of use. Containerization reduces complexity and vulnerability, and it generally makes the development process more resilient to bugs introduced by developers using different dependencies, or different versions of the same dependencies. If developers are using Docker, they have a consistent container that is completely independent of what they have installed on their own systems, so there is no variation in how a build will go and no bugs are introduced in the build process, making both building and testing more stable. The post also contains a brief but helpful explanation of how Docker actually functions. A customized Docker image can be used to tailor instances of the container to what is needed for development and allow for a more modular work environment, since everything needed is stored in the image.
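To make the idea of a customized image concrete, here is a minimal Dockerfile sketch for a Java project (the base image tag, build command, and jar name are assumptions of mine for illustration, not details from the post):

```dockerfile
# Dockerfile - pins the build environment so every developer uses the same one
# Same JDK for everyone on the team (tag chosen only for illustration)
FROM eclipse-temurin:17-jdk
WORKDIR /app
# Copy the project into the image and build it
# (assumes a committed Gradle wrapper; hypothetical artifact name below)
COPY . .
RUN ./gradlew build --no-daemon
CMD ["java", "-jar", "build/libs/app.jar"]
```

Anyone who builds this image gets the same JDK and dependencies, regardless of what is installed on their own machine.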

I chose this post because it felt like a pretty good introduction to what Docker is, how it works, and why it is being used more and more in professional software development. From my own experience using Docker so far, it seems like an extremely useful tool. There is no longer a need to have Java installed on my system just for software development; I don't have to worry as much about what versions I have installed, or about having multiple versions that can introduce issues into my development. There just seem to be so many perks to using containerization, especially as part of a development team. After the initial setup of getting Docker to work, all the dependencies of your code are just stored in an image that can be used by everyone on the dev team. There is no longer a need to worry about somebody having an out-of-date version of something that can break the code or cause inconsistent testing results. I will definitely continue to use Docker in the future; it just seems like an invaluable tool for any kind of software development, either personal or professional. And the amount of development tools that are made for Docker or can interact with Docker makes it even more useful.


Source: https://www.bmc.com/blogs/docker-101-introduction/

From the blog CS@Worcester – Kurt Maiser's Coding Blog by kmaiser and used with permission of the author. All other rights reserved by the author.

GRASP

General Responsibility Assignment Software Patterns (GRASP) is a set of guidelines for assigning responsibilities to classes and objects in object-oriented design. GRASP has 9 different principles and patterns, each of which is presented as a problem together with its solution:

Information Expert – What is the basic principle by which responsibilities are assigned to objects?

– Assign the responsibility to the class that has the information necessary to fulfill it.

In this example, the Customer class carries references to all the orders that the customer has, so it naturally becomes the candidate responsible for calculating the total value of those orders. This is one of the main principles, because if an object does not have all the data it needs, it cannot meet the requirement, and the responsibility should not be assigned to it.
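A minimal Java sketch of that example (the class and method names are my own illustration, not from the article):

```java
import java.util.ArrayList;
import java.util.List;

// Information Expert: Customer holds the orders, so it computes their total.
class Order {
    private final double value;
    Order(double value) { this.value = value; }
    double getValue() { return value; }
}

class Customer {
    private final List<Order> orders = new ArrayList<>();

    void addOrder(Order order) { orders.add(order); }

    // The class that has the necessary information fulfills the responsibility.
    double totalOrderValue() {
        return orders.stream().mapToDouble(Order::getValue).sum();
    }
}
```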

Creator – Who should create an instance of object A?

– Assign class B the responsibility of creating A if B contains or aggregates A, records A, closely uses A, or, most importantly, has the initializing data that A needs.

This pattern helps us decide which class should be responsible for creating a new instance of another class. Creating objects is one of the most common activities in a system, so it is important to have a principle for deciding who should create instances of a given class.
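Continuing the same hypothetical example and reusing the Order class from the sketch above, Customer aggregates orders and has their initializing data, so by the Creator principle it is the natural place to create them:

```java
// Creator: Customer aggregates Orders and has their initializing data,
// so Customer (not the client code) creates new Order instances.
class Customer {
    private final java.util.List<Order> orders = new java.util.ArrayList<>();

    Order createOrder(double value) {
        Order order = new Order(value);   // creation happens inside the aggregator
        orders.add(order);
        return order;
    }
}
```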

Controller – What is the first object beyond the UI layer that receives and coordinates ("controls") a system operation?

– Assign the responsibility to an object that represents the overall "system" (a "root object"), or to an object that represents the use-case scenario within which the system operation occurs.

How this principle is implemented depends on the high-level design of our system, but we always have to define the object that receives and processes the system's business transactions.
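A small sketch of a use-case controller, building on the Customer sketch above (all names are hypothetical):

```java
// Controller: the first object beyond the UI layer that receives a system
// operation and coordinates it by delegating to the domain objects.
class OrderController {
    private final Customer customer;

    OrderController(Customer customer) { this.customer = customer; }

    // Called by the UI layer; the controller only coordinates, it holds no business logic.
    void placeOrder(double value) {
        customer.createOrder(value);
    }
}
```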

Low Coupling – How do we reduce the impact of change? How do we support low dependency and increased reuse?

– Assign responsibilities so that (unnecessary) coupling remains low.

Coupling is essentially a measure of how strongly one element is connected to another. The higher the coupling, the greater the dependence one element has on another.

High Cohesion – How do we keep objects focused, manageable, and understandable, while also supporting low coupling?

– Assign responsibilities so that cohesion remains high.

Cohesion is a measure of how closely related the responsibilities of an element are. In general, classes with low cohesion contain data that is unrelated, or behaviors that are unrelated.

Indirection – Where should responsibility be assigned in order to avoid direct coupling between two or more things?

– Assign the responsibility to an intermediate object that mediates between the other services or components, precisely so that they are not directly coupled.

This is where the Mediator pattern comes into play. Indirection supports low coupling, but at the same time it can reduce readability and make it harder to reason about the system as a whole.
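A minimal Java sketch of indirection (the names are hypothetical): the ordering code talks to an intermediate interface rather than to a concrete payment provider, so the two are not directly coupled.

```java
// Indirection: OrderService depends on an intermediate interface,
// not on any concrete payment provider.
interface PaymentGateway {
    void charge(double amount);
}

class ExamplePaymentAdapter implements PaymentGateway {   // hypothetical provider
    public void charge(double amount) {
        // call the external payment API here
    }
}

class OrderService {
    private final PaymentGateway gateway;                  // the indirection point

    OrderService(PaymentGateway gateway) { this.gateway = gateway; }

    void checkout(double amount) { gateway.charge(amount); }
}
```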

Polymorphism – How are alternatives that are based on type handled?

– When related alternatives or behaviors vary by class (type), assign the responsibility for the behavior to the types for which the behavior varies, using polymorphic operations.

Polymorphism is a fundamental principle of object-oriented design. It is very closely related to what is otherwise called the Strategy pattern.
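A short Java sketch of the idea (illustrative names only): the behavior that changes with the type lives inside the types themselves, so client code never needs to switch on the concrete class.

```java
// Polymorphism: each type carries the behavior that varies by type.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Rectangle implements Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    public double area() { return width * height; }
}
```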

Pure Fabrication – Which object should get a responsibility when we do not want to violate High Cohesion and Low Coupling, and the solutions offered by the other principles are not appropriate?

– Assign a highly cohesive set of responsibilities to an artificial or convenience class that does not represent a concept from the problem domain.

In some cases it is very difficult to decide where a responsibility should be placed. This is the main reason the Domain Service concept exists in Domain-Driven Design: the logic behind these services is not directly related to any single entity.

Protected Variations – How do we design objects, subsystems, and systems so that variations or instability in those elements do not affect other elements?

– Identify the points of predicted variation or instability, and assign responsibilities to create a stable interface around them.

This principle is the most important one, and it is indirectly related to all the other GRASP principles. As programmers, we must always be ready for requirements that are constantly changing.

References:

https://titanwolf.org/Network/Articles/Article?AID=c96c4845-28c5-46c8-ae48-

https://yamm.finance/wiki/GRASP_(Object_Oriented_Design).html

From the blog CS@worcester – Xhulja's Blogs by xmurati and used with permission of the author. All other rights reserved by the author.

What is Docker and why are we using it?

For the past few weeks in class, we have been working with something called Docker. I have been working on projects that used Docker, and we recently did an activity on Docker commands. With all this work revolving around Docker, I wanted to familiarize myself with it further. I did some research on what Docker is, how it works, and why we use it. There is an abundance of sources and blogs that go in depth on how Docker works. That being said, this blog post will just relay most of that information, and you may find it useful if you have been confused about Docker up until now.

Let’s first understand what Docker is. A very informative source that I found was an article by IBM that explains this topic very well. Docker is an open source platform that utilizes containerization to package applications, their dependencies and required operating systems into containers. This in turn allows software developers like us to write code and build applications no matter the environment. Though it took a bit to get set up, I found that it made the whole process of writing programs more convenient.

For our second homework assignment, to get the project running in Visual Studio Code, we needed to reopen the folder in a dev container. Docker revolves around the process of containerization, a variation of virtualization. When you hear the term virtualization, you may think of virtual machines, which emulate a physical machine by virtualizing the underlying hardware, the OS, the application, and its dependencies. Containers, on the other hand, virtualize only the application and its dependencies on top of the host OS. As a result, containers offer more portability because "unlike a virtual machine, containers do not need to include a guest OS in every instance and can, instead, simply leverage the features and resources of the host OS," as stated in another article by IBM.
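For reference, the dev container setup mentioned above comes down to a small configuration file (.devcontainer/devcontainer.json) that tells VS Code which image to use. This is only a generic sketch; the image name and port are assumptions of mine, not the ones from our assignment:

```json
{
  "name": "class-project",
  "image": "mcr.microsoft.com/devcontainers/java:17",
  "forwardPorts": [8080]
}
```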

Now that we have a better understanding of how containers differ from virtual machines, I just wanted to conclude by listing the benefits of using Docker and containers. IBM mentions that containers are more lightweight. I have definitely noticed the difference in system usage between running a virtual machine and just running Docker. Another benefit I have seen is the increase in development efficiency, especially for the second homework assignment, where we were required to run the code against tests several times as we made changes. Overall, I found that writing this blog post helped me get a better understanding of what Docker is, how containers work, and their benefits to the software development process. It allowed me to weigh the pros and cons of using virtual machines as opposed to containers. And now I can understand why we are using Docker.

Sources:

https://www.ibm.com/cloud/learn/docker

https://www.ibm.com/cloud/blog/containers-vs-vms

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Strategy Pattern Design Article

https://refactoring.guru/design-patterns/strategy

This article shows the "intent" of the strategy design pattern and how to successfully use it when refactoring code, which is our current class topic. According to the article, the strategy pattern (a behavioral design pattern) lets you define a family of algorithms, put each one in a separate class, and make their objects interchangeable.

One of the larger issues the strategy pattern addresses is keeping the main class from growing into a complex mess. The pattern allows a developer to extract the different behaviors of a class and funnel them into new classes, each of which the pattern refers to as a strategy.

Interfaces are commonly used with the strategy pattern so the main class can communicate with the "strategies" that were extracted from it. Using an interface also allows your code to switch between algorithms at run time, because each strategy is a separate object that performs its own task.
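A minimal Java sketch of the pattern as the article describes it (the class names are my own illustration):

```java
// Strategy: a family of algorithms behind one interface, swappable at run time.
interface DiscountStrategy {
    double apply(double price);
}

class NoDiscount implements DiscountStrategy {
    public double apply(double price) { return price; }
}

class HolidayDiscount implements DiscountStrategy {
    public double apply(double price) { return price * 0.8; }  // 20% off
}

// Context: delegates to whichever strategy it currently holds.
class Checkout {
    private DiscountStrategy strategy;

    Checkout(DiscountStrategy strategy) { this.strategy = strategy; }

    void setStrategy(DiscountStrategy strategy) { this.strategy = strategy; }

    double total(double price) { return strategy.apply(price); }
}
```

A Checkout object can start with NoDiscount and be switched to HolidayDiscount at run time without the Checkout class itself changing.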

I chose this article as I feel it gives readers like myself a good general understanding of how to use and implement the strategy pattern. The article effectively shows you when you should or should not use the strategy pattern, along with the pros and cons of using it. Similarities between the strategy pattern and other patterns are also outlined in the article, which helps you tell the patterns apart and decide when you should use one pattern over another.

This article from refactoring guru has helped me to better understand the strategy pattern as a whole and helped me gain a somewhat smaller but still important understanding of some aspects of other design patterns (for example command and state). There are also examples of the strategy pattern being used in different coding languages found under the “Relations with Other Patterns” section.  I plan to use the information in this article to help me understand the Strategy pattern more in current and future assignments as well as in my professional career whenever I may need to refactor code through the different design patterns.

Overall, I believe the general understanding of the strategy pattern gained from this article can help me and any other student who may be having trouble with the topic, or even someone who is just curious and would like to learn more about design patterns, as there are articles on the other design patterns on the same website that can be accessed easily through links near the end of the page.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

Software Construction Log #2 – Learning about Containerization and Virtualization

            In my experience, the concept of virtualization is currently synonymous with the creation of virtual machines that are used to emulate hardware and operating systems that, for one reason or another, are not readily available during the process of software development. For example, during my studies I have needed to use programs that were not available on Windows operating systems or to study an operating system for which a physical unit may not be readily available. Whatever the case may be, virtualization is not a new concept, and it is widely utilized in software development. It is important to note that virtualization is not exclusive to virtual machines, as it has a broader range that includes any concept related to the abstraction of a system’s physical components into virtual components.

            Among others, one concept of virtualization is containerization, with which most of us are familiar through Docker. Specifically, containerization refers to the containment of applications, their dependencies, and their required operating system into a single package, called a container (hence the name), that can be deployed and used on any operating system. By design, containers are meant to be a portable and lightweight way of testing and deploying applications, at least when compared to virtual machines. However, it is important to note that singular instances of containers cannot be modified, whereas it is possible to customize and modify virtual machines. Despite the caveats or benefits of virtual machines and containers, both are equally important during development.

          As I mentioned before, I have used virtual machines during my studies to create and use servers that I had no immediate physical access to, so the concept of virtual machines is not entirely foreign to me. However, I have little experience by comparison when it comes to Docker and using containers for software development, so I believe it is important for me to understand their differences so that I can know how to properly utilize them. As I was researching more regarding the concepts of virtualization and containerization, I came across the post titled What’s the Diff: VMs vs Containers on BackBlaze.Com, in which Roderick Bauer defines what virtual machines and containers are in detail, how they are different based on their structure on a server, as well as listing their benefits and best uses. Though Bauer does not directly state their caveats, by looking at the differences of both virtualization and containerization I can better understand when either approach could be more suitable depending on the needs during development.

            Moreover, what this post also helped me understand better is that neither option is mutually exclusive; it is possible (and sometimes even preferable) to utilize both virtualization and containerization during development, rather than being limited to either option, so long as doing so contributes to improving development.

Direct link to the resource referenced in the post: https://www.backblaze.com/blog/vm-vs-containers/

Recommended materials/resources reviewed related to virtualization, virtual machines, and Docker/containerization:
1) https://www.oracle.com/cloud-native/container-registry/what-is-docker/
2) https://www.infoworld.com/article/3204171/what-is-docker-the-spark-for-the-container-revolution.html
3) https://www.docker.com/resources/what-container
4) https://devopscon.io/blog/docker/docker-vs-virtual-machine-where-are-the-differences/
5) https://www.airpair.com/docker/posts/8-proven-real-world-ways-to-use-docker
6) https://opensource.com/resources/virtualization
7) https://en.wikipedia.org/wiki/Virtualization (Definition of Virtualization)
8) https://www.ibm.com/cloud/learn/containerization

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.

Decision Table – Based Testing

For the fourth assignment in software testing, we had to create a decision table. A decision table is a black-box test technique that visually presents combinations of inputs and outputs, where inputs are conditions or cases, and outputs are actions or effects. A full decision table contains all combinations of conditions and actions. Additionally, it shows the causes and effects, which is why this technique is also called a cause-effect table. A well-created decision table can help sort out the right response of the system depending on the input data, as it should include all conditions. It simplifies designing the logic and thus improves the development and testing of our product.

A decision table works on input conditions and actions. We create a table in which the top rows are the input conditions and the bottom rows are the resulting actions, while each column corresponds to a unique combination of those conditions.
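As a tiny illustration (a made-up login example, not the one from our assignment), a decision table with two conditions has four columns, one for each combination:

```
Conditions             R1   R2   R3   R4
Username valid?        T    T    F    F
Password valid?        T    F    T    F
Actions
Grant access           X    -    -    -
Show error message     -    X    X    X
```

Each column becomes one test case, which is why the technique can guarantee coverage of all the combinations.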

Advantages of Decision Table Testing

When the system behavior differs for different combinations of inputs, equivalence partitioning and boundary value analysis alone won't help, but a decision table can be used.

It can be easily interpreted and is used for development and business as well.

It helps create effective combinations of conditions and better coverage for testing.

Any complex business conditions can be easily turned into decision tables.

In cases where we are going for 100% coverage, typically when the number of input combinations is low, this technique can ensure that coverage.

Disadvantages of Decision Table Testing

As the number of inputs increases, the table becomes more complex.

Resources:

https://www.guru99.com/decision-table-testing.html

From the blog CS@Worcester – Tech, Guaranteed by mshkurti and used with permission of the author. All other rights reserved by the author.

Dig Deeper

For this week I want to talk about the Dig Deeper apprenticeship pattern. This is a pattern that everyone in computer science understands well, and it doesn't need too many words. The main point of this pattern is that no matter what tool you decide to explore, learn to dig deep into it. Acquire depth of knowledge to the point that you know why things are the way they are.

In today's world there are so many tools that it is hard to keep track of them all. I'm sure anyone can learn a tool or language if they have enough time to explore and understand it. If that's not the case, then you only ever learn the parts of a technology that you need to get your portion of the system working, and you depend on other members of the team to learn the other parts. What you end up with is a superficial knowledge of a thousand tools, and you're not even aware of how little you know until something or someone puts you to the test.

Another thing that I both like and don't like is depending on other members of the team to learn the other parts. There are serious students and coworkers who will do their job, but there is also the other side of the coin: people who don't take it seriously, so you end up doing all the work by yourself or dealing with misunderstandings. The solution to this is to hope you have time to learn and explore more of the tool, and to keep your fingers crossed that you have someone driven to learn new things working with you.

Another advantage of digging deep into a technology is that you can actually explain what's going on beneath the surface of the systems you work on. The book explains how, in interviews, this understanding will distinguish you from other candidates who can't describe the software they've helped build in a meaningful way because all they understand is one little portion. Once you're part of a team, it's the application of this pattern that separates you from those who are making random piles of rubble. So you don't have to fake it till you make it. If you invest your free time in learning something new, you are going to add more value to yourself.

References:
Apprenticeship Patterns by Dave Hoover; Adewale Oshineye

From the blog CS@Worcester – Tech, Guaranteed by mshkurti and used with permission of the author. All other rights reserved by the author.

Record What You Learn – Apprenticeship Pattern

In this post I will be discussing the apprenticeship pattern "Record What You Learn," written by Adewale Oshineye and Dave Hoover in the book Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman, 2009. This pattern is designed for people who end up learning the same lessons over and over again, or who go through the same experiences or failures but can never get the details or reasons to stick.

The solution suggested is to create a blog, notebook, or wiki that you can treat as a journal and record important things that you learn. This is not something to write down and forget about. Throughout your career you should return to this journal to review it and make new connections in your memory. When reviewing, you should update what is written as more knowledge and experience is accumulated over time. The authors even suggest creating two blogs, one a public record and one private. This allows you to share the lessons you have learned and get feedback on what you have written with the public record, and to be brutally honest with yourself in the private record. Internal and external feedback allows you to have an honest and accurate self-assessment. The main goal of this pattern is to keep a journal of your path to mastery so that you can reflect on it and learn from it in the future.

I found this particular apprenticeship pattern interesting because this morning I was thinking about starting a notebook in which I can keep any important lessons as well as important details. This way I would be able to look back on it frequently and grow as a developer. This could include important concepts, design patterns or failures that I can learn from as I move forward. This book has provided a useful structure in which I can follow and also has inspired me to follow through with it since I have said I was going to do it a few times already but never have. Also, it has given the idea of two journals in which one can be private and one public. Maybe also different journals for different topics such as one for design patterns, one for lessons learned, one for important topics, etc.

Hoover, D. H., & Oshineye, A. (2010). Apprenticeship patterns: Guidance for the aspiring software craftsman. Sebastopol, CA: O’Reilly.

From the blog CS@Worcester – Austins CS Site by Austin Engel and used with permission of the author. All other rights reserved by the author.

Your First Language

I always feel that the better you are at your first language, the easier it is to learn the next one.

I learned Java in my sophomore year, and then I went on to learn C, which felt easy, because the logic of computer languages is interchangeable. For example, if you learn English well, you will find some similarities between French and Spanish. Although French or Spanish may be your second language, you will learn them much faster than non-proficient English learners would. I think this applies to learning computer languages as well.

Each language gives you the opportunity to use different patterns to solve problems. In the process of moving beyond your first language, you should look for opportunities to learn languages that approach problems in very different ways. Apprentices who are comfortable with object-oriented languages should explore a functional programming language. Students of dynamic typing should delve into static typing. Apprentices comfortable with server-side programming should take a look at user interface design.

You should not be “married” to any particular technology but should have a broad enough technical background and experience base to be able to choose the right solution for a particular situation.

Many people say Java is good because it is suitable for many kinds of software programming. Some people say C++ is good because it is more advanced than Java; there are also people who say that once you have learned C++, learning Java is very simple. I personally hate arguing about which language is best; every situation has a language that works best for it. And if you have learned your first language well, mastering it is also a good option. But there are certain situations where C really offers a better solution than Java, so we write our software in C. At that point, there is no need to stubbornly think that because I am good at Java, I have to use the language I am good at to solve the problem.

The spirit of craftsmanship means striving for the best in what you are good at, but in certain situations we can't stick rigidly to the rules. Modern society is a utilitarian society, and we need to maintain the spirit of artisans while learning to adapt to it.

From the blog haorusong by and used with permission of the author. All other rights reserved by the author.