Category Archives: Week 5

API

What really is an API?

We’ve all heard of APIs, but what are they exactly? Think of it by analogy with human communication: our thoughts, wants, and ideas can all be expressed through language (both written and spoken), gestures, and facial expressions. To interact with computers, applications, and websites, people need user interface components such as a screen with menus and graphical elements, a keyboard, and a mouse. Software and its components, however, do not require a graphical user interface to communicate with one another. APIs (application programming interfaces) are machine-readable interfaces that allow software products to exchange data and functionality.

REST API

Roy Fielding first described REST in his doctoral dissertation in 2000. It’s a collection of architectural elements, design concepts, and interactions for constructing distributed systems that serve any type of media (text, video, etc.). REST is a method of system development that allows information to be transmitted and presented flexibly over the web, while also providing the structure needed to quickly build general-purpose components. REST differs from some other web services in that it delivers raw data to the requesting application. While this gives the program a lot of freedom, allowing it to do anything it wants with the data, it can come at the expense of efficiency: delivering data over the internet for processing is much slower than doing the processing locally and then sending back the results.
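
To make that concrete, here is a minimal sketch in Java of a client calling a REST API and receiving raw data to process however it likes. The endpoint https://api.example.com/users/42 is hypothetical, not a real service:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestClientExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET a resource from a hypothetical REST endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        // The server returns raw data (e.g. JSON); the client is free
        // to process, transform, or display it however it wants.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Status: " + response.statusCode());
        System.out.println("Body:   " + response.body());
    }
}
```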

Private API

These application programming interfaces are meant to improve an organization’s own solutions and services. In-house developers or contractors can use these APIs to integrate a company’s IT systems or applications, as well as to create new systems or customer-facing apps that build on existing systems.

Partner API

Partner APIs are promoted openly but shared only with business partners that have signed a contract with the publisher. Software integration between two companies is a frequent use case for partner APIs. A firm that gives its partners access to data or capabilities can generate additional revenue streams. It can also keep track of how the exposed digital assets are being used, ensure that third-party solutions built on its APIs deliver a good user experience, and guarantee that its corporate identity is maintained in those apps.

Public API

These APIs, often known as developer-facing or external APIs, are open to all third-party developers. When implemented correctly, a public API program can increase brand recognition while also providing an extra source of revenue. There are two kinds of public APIs: open (free) and commercial (for a fee).

Why I chose this topic?

I’ve always wanted to understand more about APIs because I’ve used a REST API before but had no idea how it operated or what it really was; I only used it by following the documentation on how to do so. After doing this research, I have a basic understanding of APIs and their various kinds.

From the blog cs@worcester – Dream to Reality by tamusandesh99 and used with permission of the author. All other rights reserved by the author.

Strategy vs. Singleton

“Don’t ever leave your baby in the road.” – My First Computer Science Professor, 2015.

That came out of left field, didn’t it? However, the quote (much like YAGNI) makes a lot of sense in the world of Computer Science. Specifically, it refers to the idea that you should never “assume that you’ll always be careful and pay attention”; rather, you should prevent any mishaps from happening in the first place. Global variables, for the most part, should be avoided; being accessible from anywhere within the program is dangerous, to say the least.

However, the “Singleton” method of refactoring seems to make an argument for the usage of global variables (provided that they still have a level of protection from modifying the data). Singleton refactoring works on the idea that having one “global” instance of a behavior class can save memory; client classes using the same behavior can all reference that single instance. And thanks to certain features (for example, making the constructor a private method within the class), a global Singleton object does not suffer the same problems that traditional global variables bring to the table.
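
To illustrate, here is a minimal Singleton sketch; the class and field names are my own, not taken from the linked articles:

```java
public final class ConfigSingleton {
    // The one shared instance, created lazily on first use.
    private static ConfigSingleton instance;

    private final String setting;

    // Private constructor: no other class can create new instances.
    private ConfigSingleton() {
        this.setting = "default";
    }

    // Global access point to the single shared instance.
    public static synchronized ConfigSingleton getInstance() {
        if (instance == null) {
            instance = new ConfigSingleton();
        }
        return instance;
    }

    public String getSetting() {
        return setting;
    }
}
```

Every caller of getInstance() receives the same object, so it behaves like a global, but the private constructor rules out the uncontrolled instantiation that makes ordinary globals risky (and the synchronized accessor guards the lazy initialization).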

This is opposed to the “Strategy” method, which involves creating a new class for each new behavior found within a project. This way, we can create an interface for the behavior and have it implemented by the client classes that use it (saving memory that would otherwise be wasted on unused, inherited methods). Yes, the Strategy method is a step up from mere method overriding. However, its need to constantly create new classes consumes plenty of memory in its own right (which can lead to poor performance).
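
Here is a minimal Strategy sketch (again with hypothetical names): one behavior interface, with each concrete behavior in its own small class that clients can swap in:

```java
// The behavior is defined once as an interface...
interface FlyBehavior {
    void fly();
}

// ...and each concrete behavior gets its own small class.
class FlyWithWings implements FlyBehavior {
    public void fly() { System.out.println("Flapping wings!"); }
}

class NoFly implements FlyBehavior {
    public void fly() { System.out.println("Staying grounded."); }
}

class Duck {
    private final FlyBehavior flyBehavior;

    Duck(FlyBehavior flyBehavior) {
        this.flyBehavior = flyBehavior;
    }

    // The client delegates to whichever strategy it was given.
    void performFly() {
        flyBehavior.fly();
    }
}

public class StrategyDemo {
    public static void main(String[] args) {
        new Duck(new FlyWithWings()).performFly(); // Flapping wings!
        new Duck(new NoFly()).performFly();        // Staying grounded.
    }
}
```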

Both ideas (the Singleton and the Strategy) are discussed within the articles linked below. I personally picked these articles because they were my “help reference” when working on some of the homework assignments. Refactoring Guru is a great website that summarizes different refactoring strategies, as well as providing pseudocode and real-world applications of each strategy.

I hope to use this newfound knowledge to improve upon my refactoring skills; this in turn will cut down on code smells and create more efficient solutions to programming problems. When considering the Singleton design, we can use global variables without running into code smells such as “needless complexity” (bringing in an unnecessary aspect), “viscosity” (making future updates more difficult to implement), or “fragility” (errors caused by the data being modifiable anywhere within the program). As for the Strategy design, we can ensure that classes only contain methods that they’ll actually use; this cuts down on the “needless repetition” code smell (by ensuring that inherited methods don’t go unused).

Links: https://refactoring.guru/design-patterns/strategy

https://refactoring.guru/design-patterns/singleton

From the blog CS@Worcester – mpekim.code by Mike Morley (mpekim) and used with permission of the author. All other rights reserved by the author.

Self-Directed Professional Development Post #5

For this week’s blog post, I’ve decided to continue watching the video I started in my last professional development post, “Object Oriented Design Tutorial: Creating a UML Design from Scratch” by Derek Banas. I picked this video because (1) it directly relates to one of our course topics, “Modeling: Unified Modeling Language (UML),” and (2) it teaches a methodical process for creating UML diagrams. Again, since I will likely be making more UML diagrams in my educational/professional life, I want to further develop this skill. In my last post, I learned how to create a Use Case Description, and in this post, I will discuss how to use a Use Case Description to create an Object Model and a Sequence Diagram.

Derek starts his Object Model by using the actors he listed in his Use Case Description to create 3 separate object boxes in his Object Model diagram. He connects the actors/boxes with lines to show what action is carried out between the given objects and the multiplicity that each box has. Next, Derek fills in each object he has created with pertinent data fields; for example, since there is one object box for two players, he adds name as an important piece of information to differentiate the two players. This process was valuable to me because I tried to predict what information would be important before Derek did. I wanted to include some methods in the Object Model, but I learned that this is not where they go. Derek only includes attributes of the objects and other objects that a specific class will use (e.g., the CoinGame includes both the player and coin actors).

The next thing Derek does is create a Sequence Diagram. It is worth mentioning that Derek has a separate introductory video on Sequence Diagrams, so he goes somewhat quickly through this process. Either way, a lot happened, and I learned plenty from observing the process. Overall, the main takeaway from this part of the tutorial is that Derek uses the Steps of Execution from the Use Case Description to lay out the logic of the program he is creating. Throughout this process I thought it was interesting that there were some things that Derek himself was not sure of as far as what made the most logical sense, but he went forward with what worked for him. This was really insightful because as I practice this process on my own, I’m realizing that I can create a similar program with a slightly different logical order, based on what makes the most sense to me.

My goal is to continue following along with Derek in this process so I can use a similar approach to create a simple program on my own. For my next post, I’m considering following Derek as he creates a UML diagram he describes as “easy” now that he has his Sequence Diagram created.

Tutorial link: https://www.youtube.com/watch?v=fJW65Wo7IHI&list=PLGLfVvz_LVvS5P7khyR4xDp7T9lCk9PgE&index=3

From the blog Sensinci's Blog by Sensinci's Blog and used with permission of the author. All other rights reserved by the author.

CS-343 Post #2

After working with Docker in the past couple of activities, I wanted to learn more about it, particularly how it uses images and containers, which we worked with in activity 6. I have a good idea of how they work and some differences between them, but I wanted to look more into images and containers to get a clearer picture and resolve some questions I had from the activity. The post I read that discusses these two elements of Docker, and that I found helpful, was “Docker Image vs Container: What is the difference?” from phoenixNAP.

Docker images are the executable files that contain the source data containers need to run, and containers run from images. Containers are the environments in Docker where users can interact with applications. Containers need images to run, and without a container to run in, images aren’t very useful. Images cannot be edited because they are almost always in read-only mode, and an image can act as a preview of its container.

What I did not know about images was that an image can have multiple layers, with one of those layers being the base for the container. Unlike containers, images cannot be stopped or run.

I knew that containers are the main place where you interact with the program and that they provide standardization, but I did not know for sure that the virtualization takes place at the app level in containers rather than at the hardware level. Containers are also autonomous and very secure due to their isolation.

From activity 6, one of the main things I saw that separates the two is that a container has an image as part of it: when a container is created, both the container and its image are created. But when the container was removed, the image stayed while only the container was removed. The image cannot be removed on its own while it is being used by a container, and stopping the container will not allow this either.

One quote from the post that I think best sums up the relationship between the two is “When discussing the difference between images and containers, it isn’t fair to contrast them as opposing entities. Both elements are closely related and are part of a system defined by the Docker platform.” There are things that separate them, but images and containers aren’t meant to be compared and contrasted; they are meant to be used together to efficiently run the programs and data defined in the Dockerfile.

https://phoenixnap.com/kb/docker-image-vs-container

From the blog Jeffery Neal's Blog by jneal44 and used with permission of the author. All other rights reserved by the author.

Blog Discovery

Why do we use Docker?

I decided to choose this article because it goes in-depth on how we use Docker in real life and how efficient its approach to “virtualization” can be, which allows applications to run more smoothly. Plus, the BMC blog is filled with potential reads about software development, implementation, methods, and other projects they have completed. Since we are already learning the basics of Docker and the command line, this information on how Docker works will be useful for future projects in my class, and it will give not just me but also other students the insight and resources needed to understand how to operate Docker and its components.

What is Docker exactly?

Docker is an open-source, Linux-based platform for virtualization that uses containers to build, run, test, and package applications efficiently without losing integrity, because it relies on the OS (through the Docker Engine) rather than the computer’s hardware. Docker is broken down into different elements that help it run more smoothly than traditional virtualization: containers, images, registries, the Dockerfile, and the Docker Engine. A Docker image is a set of instructions that makes up a container and describes how to run the application. To run the images, the Docker Engine maintains the virtualization of the containers on the host machine. Wherever Docker is installed, containers behave the same way, since they all run on the same engine. For example, when we want to build a website with a web server and a database such as MySQL, we can simply make an image for MySQL, configure it with instructions for the specific port we wish, and then package the web server in a container that runs the preconfigured images and their dependencies. If more servers need to be added, the containers are easily deployable and easy to migrate to the new servers.

I have no prior experience with Docker, but I now have a good understanding of how it can be useful in software development and how efficient it can be, especially when compared to the cost of using VMs. I could have used this practice in combination with my database class: when I built a MySQL server, I could have practiced on my database with a Docker container and seen firsthand how to compose it into a container, saving the hassle if I wanted to build a web server or application entirely on Docker services.

Blog resource link: https://www.bmc.com/blogs/docker-101-introduction/

From the blog cs@worcester – Dahwal Dev by Dahwal Charles and used with permission of the author. All other rights reserved by the author.

DevOps With Docker

This week, as we have begun to use Docker and explore how to use it further, I thought it would be a good time to look further into what Docker is and why it is used professionally. In doing so, I found a relatively short blog post by Sudip Sengupta called Introduction To Docker: A Beginner’s Guide that I think does a pretty good job of explaining the positives of using Docker as a development tool.

The post begins by covering why a lot of companies are switching to a containerized framework for development. Mostly, Sengupta explains, it is due to the ease of use. Containerization reduces complexity and vulnerability, and it generally makes the development process more resilient to bugs introduced by developers using different dependencies, or different versions of the same dependencies. If developers are using Docker, they have a consistent container that is completely independent of what they have installed on their own systems. So there is no variation in how a build will go, and no environment-related bugs can be introduced in the build process, making both building and testing more stable. The post also contains a brief but helpful explanation of how Docker actually functions: a customized Docker image can be used to tailor instances of the container to what is needed for development, allowing for a more modular work environment, as everything needed is stored in the image.

I chose this post because it felt like a pretty good introduction to what Docker is, how it works, and why it is being used more and more in professional software development. From my own experience using Docker so far, it seems like an extremely useful tool. There is no longer a need to have Java installed on my system just for software development; I don’t have to worry as much about what versions I have installed, or about having multiple versions that can introduce issues into my development. There just seem to be so many perks to using containerization, especially as part of a development team. After the initial setup of getting Docker to work, all the dependencies of your code are just stored in an image that can be used by everyone on the dev team. There is no longer a need to worry about somebody having an out-of-date version of something that can break the code or cause inconsistent testing results. I will definitely continue to use Docker in the future; it just seems like an invaluable tool for any kind of software development, either personal or professional. And the number of development tools that are made for Docker or can interact with it makes it even more useful.


Source: https://www.bmc.com/blogs/docker-101-introduction/

From the blog CS@Worcester – Kurt Maiser's Coding Blog by kmaiser and used with permission of the author. All other rights reserved by the author.

GRASP

GRASP (General Responsibility Assignment Software Patterns) is a set of guidelines for assigning responsibilities to the classes and objects in an object-oriented design. GRASP consists of 9 different principles and patterns, each presented here as the problem it addresses and the solution it offers:

Information Expert – Problem: What is the basic principle by which responsibilities are assigned to objects?

Solution: Assign the responsibility to the class that has the information necessary to fulfill it.

In this example, the Customer class carries references to all the orders that a customer has placed. In this way, making it the candidate responsible for calculating the total value of the various orders comes quite naturally. This is one of the main principles for determining responsibility: if a class does not have all the data it needs, it cannot fulfill the requirement.
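
A minimal sketch of Information Expert (class names are my own, hypothetical): the Customer holds the orders, so it is the natural owner of the total-value calculation:

```java
import java.util.ArrayList;
import java.util.List;

class Order {
    private final double value;

    Order(double value) { this.value = value; }

    double getValue() { return value; }
}

class Customer {
    // The Customer is the "information expert": it holds the orders...
    private final List<Order> orders = new ArrayList<>();

    void addOrder(Order order) { orders.add(order); }

    // ...so it is the natural owner of the total-value calculation.
    double totalOrderValue() {
        return orders.stream().mapToDouble(Order::getValue).sum();
    }
}

public class ExpertDemo {
    public static void main(String[] args) {
        Customer customer = new Customer();
        customer.addOrder(new Order(20.0));
        customer.addOrder(new Order(5.0));
        System.out.println(customer.totalOrderValue()); // 25.0
    }
}
```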

Creator – Problem: Who should create object A?

Solution: Assign class B the responsibility to create object A if B contains or aggregates A, records A, closely uses A, or, most importantly, has the data needed to initialize A.

This pattern helps us decide which class should be responsible for creating a new instance of a certain class. Object creation is one of the most important processes in a design, so it is important to have a principle for deciding which class should be able to create instances of another.
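
A small sketch of Creator (hypothetical names): the ShoppingCart aggregates LineItems and has their initializing data, so by this principle it is the right class to create them:

```java
import java.util.ArrayList;
import java.util.List;

class LineItem {
    private final String product;
    private final int quantity;

    LineItem(String product, int quantity) {
        this.product = product;
        this.quantity = quantity;
    }

    @Override
    public String toString() { return quantity + " x " + product; }
}

class ShoppingCart {
    private final List<LineItem> items = new ArrayList<>();

    // The cart aggregates LineItems and holds their initializing data,
    // so by Creator it is the class that should create them.
    void addItem(String product, int quantity) {
        items.add(new LineItem(product, quantity));
    }

    List<LineItem> getItems() { return items; }
}

public class CreatorDemo {
    public static void main(String[] args) {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("coffee", 2);
        System.out.println(cart.getItems()); // [2 x coffee]
    }
}
```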

Controller – Problem: What is the first object beyond the UI layer that receives and coordinates (“controls”) a system operation?

Solution: Assign the responsibility to an object that represents the overall system (a “root object”) or a use-case scenario within which the system operation occurs.

How this principle is implemented depends on the high-level design of our system. In any case, we always have to designate the object that processes our business transactions.
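
A minimal sketch of Controller (hypothetical names): the UI object owns no business logic and simply forwards the system operation to a controller that represents the use case:

```java
// The controller represents the system operation / use case,
// and coordinates the domain objects involved in it.
class GameController {
    void startNewGame(String player1, String player2) {
        System.out.println("Starting game: " + player1 + " vs " + player2);
        // ...create domain objects and coordinate the operation here...
    }
}

// The UI layer owns no business logic; it only delegates.
class NewGameButton {
    private final GameController controller = new GameController();

    void onClick() {
        controller.startNewGame("Alice", "Bob");
    }
}

public class ControllerDemo {
    public static void main(String[] args) {
        new NewGameButton().onClick(); // Starting game: Alice vs Bob
    }
}
```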

Low Coupling – Problem: How do we reduce the impact of change? How do we support low dependency and increased reuse?

Solution: Assign responsibilities so that (unnecessary) coupling remains low.

Coupling is essentially a measure of how strongly one element is connected to another. The higher the coupling, the greater the dependence of one element on the other.

High Cohesion – Problem: How do we keep objects focused, manageable, and understandable, while also supporting low coupling as a side effect?

Solution: Assign responsibilities so that cohesion remains high.

Cohesion is defined as a measure of how closely related all the responsibilities of an element are. In general, classes with low cohesion contain data or behaviors that are unrelated to one another.

Indirection – Problem: Where should responsibility be assigned to avoid direct coupling between two or more things?

Solution: Assign the responsibility to an intermediate object that mediates between the services or components, precisely so that they do not have a direct connection.

This is where the Mediator pattern comes into play. Indirection also supports low coupling, though at the same time it can reduce the readability of, and the ease of reasoning about, the system as a whole.
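
A tiny sketch of Indirection (hypothetical names): the OrderNotifier talks to a MessageSender interface instead of directly to a concrete service, so the two ends are not directly coupled:

```java
// The stable intermediary: callers depend on this, not on a concrete service.
interface MessageSender {
    void send(String recipient, String text);
}

// One possible implementation hidden behind the intermediary.
class EmailSender implements MessageSender {
    public void send(String recipient, String text) {
        System.out.println("Emailing " + recipient + ": " + text);
    }
}

class OrderNotifier {
    private final MessageSender sender;

    // OrderNotifier never couples directly to EmailSender.
    OrderNotifier(MessageSender sender) { this.sender = sender; }

    void notifyShipped(String customer) {
        sender.send(customer, "Your order has shipped!");
    }
}

public class IndirectionDemo {
    public static void main(String[] args) {
        OrderNotifier notifier = new OrderNotifier(new EmailSender());
        notifier.notifyShipped("alice@example.com");
    }
}
```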

Polymorphism – Problem: How do we handle alternatives that vary based on type?

Solution: When related alternatives or behaviors change depending on the class (type), assign the responsibility for the behavior to the types for which the behavior varies, using polymorphic operations.

Polymorphism is a fundamental object-oriented design principle. This principle is strongly linked to what is otherwise called the Strategy pattern.
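
A minimal sketch of the principle (hypothetical names): the behavior that varies by type lives in the types themselves, behind one polymorphic operation, so no if/else on type is needed:

```java
interface Shape {
    double area(); // the polymorphic operation
}

class Circle implements Shape {
    private final double radius;

    Circle(double radius) { this.radius = radius; }

    public double area() { return Math.PI * radius * radius; }
}

class Rectangle implements Shape {
    private final double width, height;

    Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    public double area() { return width * height; }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        // Each type answers for itself; no switching on type.
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        for (Shape s : shapes) {
            System.out.println(s.area());
        }
    }
}
```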

Pure Fabrication – Problem: Which object should take a responsibility when we do not want to violate High Cohesion and Low Coupling, and the solutions offered by the other principles are not appropriate?

Solution: Assign a highly cohesive set of responsibilities to an artificial or convenience class that does not represent a concept from the problem domain.

In some cases, it is very difficult to see where a responsibility should be placed. This is the main reason why the concept of a Domain Service exists in Domain-Driven Design: the logic behind these services is not directly related to any one entity.
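
A small sketch of Pure Fabrication (hypothetical names): persistence does not belong to any domain concept, so it gets a fabricated class of its own:

```java
import java.util.HashMap;
import java.util.Map;

class Player {
    private final String name;

    Player(String name) { this.name = name; }

    String getName() { return name; }
}

// A "pure fabrication": not a domain concept, but a cohesive home
// for persistence logic that would otherwise pollute Player.
class PlayerRepository {
    private final Map<String, Player> store = new HashMap<>();

    void save(Player player) { store.put(player.getName(), player); }

    Player findByName(String name) { return store.get(name); }
}

public class FabricationDemo {
    public static void main(String[] args) {
        PlayerRepository repo = new PlayerRepository();
        repo.save(new Player("Sam"));
        System.out.println(repo.findByName("Sam").getName()); // Sam
    }
}
```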

Protected Variations – Problem: How do we design objects, subsystems, and systems so that variations or instability in some elements do not have an undesirable impact on other elements?

Solution: Identify the points of predicted variation or instability, and assign responsibilities to create a stable interface around them.

This principle is the most important one, and it is indirectly related to all the other GRASP principles. As programmers, we must always be ready for requirements that are constantly changing.

References:

https://titanwolf.org/Network/Articles/Article?AID=c96c4845-28c5-46c8-ae48-

https://yamm.finance/wiki/GRASP_(Object_Oriented_Design).html

From the blog CS@worcester – Xhulja's Blogs by xmurati and used with permission of the author. All other rights reserved by the author.

What is Docker and why are we using it?

For the past few weeks in class, we have been working with something called Docker. I have been working on projects that use Docker, and we recently did an activity on Docker commands. With all this work revolving around Docker, I wanted to familiarize myself with it further, so I did some research on what Docker is, how it works, and why we use it. There is an abundance of sources and blogs that go in depth into how Docker works; that being said, this blog post will just relay most of that information, and you may find it useful if you have been confused about Docker up until now.

Let’s first understand what Docker is. A very informative source that I found was an article by IBM that explains this topic very well. Docker is an open source platform that utilizes containerization to package applications, their dependencies and required operating systems into containers. This in turn allows software developers like us to write code and build applications no matter the environment. Though it took a bit to get set up, I found that it made the whole process of writing programs more convenient.

For our second homework assignment, to get the project running in Visual Studio Code, we needed to reopen the folder in a dev container. Docker revolves around the process of containerization, a variation of virtualization. When you hear the term virtualization, you may think of virtual machines, which emulate a physical machine by virtualizing the underlying hardware, the OS, and the application with its dependencies. Containers, on the other hand, virtualize only the OS along with the application and its dependencies. As a result, containers offer more portability because “unlike a virtual machine, containers do not need to include a guest OS in every instance and can, instead, simply leverage the features and resources of the host OS,” as stated in another article by IBM.

Now that we have a better understanding of how containers differ from virtual machines, I just want to conclude by listing the benefits of using Docker and containers. IBM mentions that containers are more lightweight; I have definitely noticed the difference in system usage between running a virtual machine and just running Docker. Another benefit I have seen is the increase in development efficiency, especially for the second homework assignment, where we were required to run the code against tests several times as we made changes. Overall, I found that writing this blog post helped me get a better understanding of what Docker is, how containers work, and their benefits to the software development process. It allowed me to weigh the pros and cons of using virtual machines as opposed to containers. And now I can understand why we are using Docker.

Sources:

https://www.ibm.com/cloud/learn/docker

https://www.ibm.com/cloud/blog/containers-vs-vms

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

Strategy Pattern Design Article

https://refactoring.guru/design-patterns/strategy

This article explains the “intent” of the strategy design pattern and how to use it successfully when refactoring code, which is our current class topic. According to the article, the strategy pattern (a behavioral design pattern) helps define a family of algorithms in separate classes whose objects can be used interchangeably.

One of the larger issues addressed by the strategy pattern is keeping the main class from growing into a complex mess. The pattern lets a developer take a class that does something in many different ways and extract each of those variants into its own new class, which is what this pattern refers to as a strategy.

Interfaces are commonly used with the strategy pattern so that the original class can communicate with the “strategies” you previously extracted from it. The use of interfaces in the strategy pattern also allows your code to switch between algorithms at run time by swapping the sub-objects that perform the tasks.
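
As a small sketch of that run-time switching (hypothetical names; this is not code from the article): the context holds a strategy behind an interface and can swap it while the program runs:

```java
import java.util.Arrays;

interface SortStrategy {
    int[] sort(int[] data);
}

class AscendingSort implements SortStrategy {
    public int[] sort(int[] data) {
        int[] copy = data.clone();
        Arrays.sort(copy);
        return copy;
    }
}

class DescendingSort implements SortStrategy {
    public int[] sort(int[] data) {
        int[] copy = data.clone();
        Arrays.sort(copy);
        // Reverse the ascending result in place.
        for (int i = 0, j = copy.length - 1; i < j; i++, j--) {
            int tmp = copy[i];
            copy[i] = copy[j];
            copy[j] = tmp;
        }
        return copy;
    }
}

class Sorter {
    private SortStrategy strategy;

    void setStrategy(SortStrategy strategy) { this.strategy = strategy; }

    int[] sort(int[] data) { return strategy.sort(data); }
}

public class RuntimeSwitchDemo {
    public static void main(String[] args) {
        Sorter sorter = new Sorter();
        sorter.setStrategy(new AscendingSort());
        System.out.println(Arrays.toString(sorter.sort(new int[]{3, 1, 2}))); // [1, 2, 3]
        sorter.setStrategy(new DescendingSort()); // swapped at run time
        System.out.println(Arrays.toString(sorter.sort(new int[]{3, 1, 2}))); // [3, 2, 1]
    }
}
```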

I chose this article because I feel it gives readers like myself a good general understanding of how to use and implement the strategy pattern. The article effectively shows when you should or should not use the strategy pattern, along with the pros and cons of using it. Similarities between the strategy pattern and other patterns are also outlined in the article, which helps you tell the patterns apart and decide when to use one pattern over another.

This article from refactoring guru has helped me to better understand the strategy pattern as a whole and helped me gain a somewhat smaller but still important understanding of some aspects of other design patterns (for example command and state). There are also examples of the strategy pattern being used in different coding languages found under the “Relations with Other Patterns” section.  I plan to use the information in this article to help me understand the Strategy pattern more in current and future assignments as well as in my professional career whenever I may need to refactor code through the different design patterns.

Overall, I believe the general understanding of the strategy pattern gained from this article can help me and any other student who may be having trouble with the topic, or even someone who is just curious and would like to learn more about design patterns, as there are articles on the other design patterns on the same website that can be accessed easily through links near the end of the page.

From the blog CS@Worcester – Dylan Brown Computer Science by dylanbrowncs and used with permission of the author. All other rights reserved by the author.

Software Construction Log #2 – Learning about Containerization and Virtualization

            In my experience, the concept of virtualization is currently synonymous with the creation of virtual machines that are used to emulate hardware and operating systems that, for one reason or another, are not readily available during the process of software development. For example, during my studies I have needed to use programs that were not available on Windows operating systems or to study an operating system for which a physical unit may not be readily available. Whatever the case may be, virtualization is not a new concept, and it is widely utilized in software development. It is important to note that virtualization is not exclusive to virtual machines, as it has a broader range that includes any concept related to the abstraction of a system’s physical components into virtual components.

            Among others, one concept of virtualization is containerization, with which most of us are familiar through Docker. Specifically, containerization refers to packaging applications, their dependencies, and their required operating system into a single package, also called a container (hence the name), that can be deployed and used on any operating system. By design, containers are meant to be a portable and lightweight way of testing and deploying applications, at least when compared to virtual machines. However, it is important to note that single instances of containers cannot be modified, whereas it is possible to customize and modify virtual machines. Despite the caveats and benefits of each, both virtual machines and containers are equally important during development.

          As I mentioned before, I have used virtual machines during my studies to create and use servers that I had no immediate physical access to, so the concept of virtual machines is not entirely foreign to me. However, I have little experience by comparison when it comes to Docker and using containers for software development, so I believe it is important for me to understand their differences so that I can know how to properly utilize them. As I was researching more regarding the concepts of virtualization and containerization, I came across the post titled What’s the Diff: VMs vs Containers on BackBlaze.Com, in which Roderick Bauer defines what virtual machines and containers are in detail, how they are different based on their structure on a server, as well as listing their benefits and best uses. Though Bauer does not directly state their caveats, by looking at the differences of both virtualization and containerization I can better understand when either approach could be more suitable depending on the needs during development.

            Moreover, what this post also helped me understand better is that neither option is mutually exclusive; it is possible (and sometimes, even preferable) to utilize both virtualization and containerization during development, rather than being limited to either option, so long as doing so does contribute to improving development.

Direct link to the resource referenced in the post: https://www.backblaze.com/blog/vm-vs-containers/

Recommended materials/resources reviewed related to virtualization, virtual machines, and Docker/containerization:
1) https://www.oracle.com/cloud-native/container-registry/what-is-docker/
2) https://www.infoworld.com/article/3204171/what-is-docker-the-spark-for-the-container-revolution.html
3) https://www.docker.com/resources/what-container
4) https://devopscon.io/blog/docker/docker-vs-virtual-machine-where-are-the-differences/
5) https://www.airpair.com/docker/posts/8-proven-real-world-ways-to-use-docker
6) https://opensource.com/resources/virtualization
7) https://en.wikipedia.org/wiki/Virtualization (Definition of Virtualization)
8) https://www.ibm.com/cloud/learn/containerization

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.