Category Archives: Week-2

YAGNI

What is YAGNI?

YAGNI – “You aren’t gonna need it” – is a key Extreme Programming (XP) practice that states: “Always implement things when you actually need them, never when you just foresee that you need them.” In other words, a programmer should not add functionality until it is proven to be absolutely necessary.

The YAGNI principle recommends that programmers build the simplest solution to today’s problems. Even if you are sure you will need a feature later, do not build it now. Most of the time, one of two things happens:

  1. You won’t need it after all, or
  2. What you really need is different from what you originally thought.

Why is the YAGNI principle especially relevant in the software development lifecycle?

In software development, when developers start implementing features, they often fall into a practice known as FEATURE CREEP: adding or improving features regardless of whether users have actually requested them. For example, developers may try to add authentication to a form before the CRUD operations against the database are even complete.
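As a rough sketch of the difference (the class and method names below are hypothetical, not from the article), the feature-creep version starts stubbing out authentication before the basic save operation even works, while the YAGNI version implements only what is needed today:

```java
import java.util.ArrayList;
import java.util.List;

// Feature creep: speculative login/reset features stubbed out before CRUD is finished.
class UserFormWithCreep {
    void save(String data) { /* TODO: still not implemented */ }
    boolean login(String user, String password) { return false; } // nobody asked for this yet
    void resetPassword(String user) { }                           // nobody asked for this yet
}

// YAGNI: only today's requirement, a working "create" operation.
class UserForm {
    private final List<String> rows = new ArrayList<>();

    void save(String data) {
        rows.add(data); // minimal but complete for the current need
    }
}
```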

Following YAGNI throughout the development lifecycle, on the other hand, lets developers implement only the features that are relevant for the first release or demo, and leave other features for future versions or updates of the software, added as users actually ask for them.

YAGNI Principle is safe, if …

On its own, the YAGNI principle may not be safe. Developers can write an early algorithm and then copy-paste it all over the system, so that when it turns out to be wrong they spend a lot of effort fixing it everywhere. With that kind of coding process, YAGNI is very dangerous and may not be the best idea.

But when developers keep up the quality of the design, the quality of the code, and the quality of the tests, YAGNI is completely safe: every future risk is contained by simple design rules, and anything that goes wrong will be easy to find and fix.

Benefits of YAGNI:

  • Put simply, the biggest benefit of the YAGNI principle is that it avoids unnecessary development.

Main reasons to practice YAGNI

1. It saves time, because you avoid writing code that you may never need.

2. Your current code is better, because you avoid guesses that may turn out to be wrong.

Beyond these two main reasons, practicing YAGNI also helps ensure:

• That deadlines are met more quickly,

• Customer satisfaction, due to visible and frequent changes that follow their needs and details,

• Better code quality, because developers focus on the small tasks that are important right now,

• Code that can be extended further and further (allowing for updated versions),

• A reduced number of adjustments required,

• That errors are easy to fix.

Reference:

https://deviq.com/principles/yagni

From the blog CS@Worcester – THE SOLID by isaacstephencs and used with permission of the author. All other rights reserved by the author.

But what really is an API?

API, absolutely not the IPA beers (unless there really is an API beer), is an acronym for Application Programming Interface, as explained by Petr Gazarov, a software developer with more than 3,600 followers on Medium, in his blog. Hmm, you might have heard of it, but what is it really?

When we type a URL, for example facebook.com, we send a request to Facebook’s remote server, which is a remotely located computer, and that server processes the request through an API to return the webpage. From that, Gazarov concludes that “an API isn’t the same as the remote server – rather it is the part of the server that receives requests and sends responses.”

One example is object-oriented design, where code is organized into objects that interact with one another. From my own experience, the first time I was aware of using an API was in my Data Structures class: the Oracle API provides the specification for the Java Platform, Standard Edition, and documents how to use its methods. I was working with a Stack in Java and wanted to look at the object at the top of the stack without removing it, so I used peek() instead of pop(), as instructed by the API documentation. The Stack class is an object designed by Oracle, peek() is a public method of that object, and a set of public methods and properties is exactly what an API is. A web server’s API is similar in spirit: the idea is still to receive requests and send responses.
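A minimal example of that peek()/pop() distinction with java.util.Stack (the toy values are my own, not from the original post):

```java
import java.util.Stack;

public class PeekVsPop {
    public static void main(String[] args) {
        Stack<String> stack = new Stack<>();
        stack.push("first");
        stack.push("second");

        // peek() returns the object at the top without removing it...
        System.out.println(stack.peek()); // "second"
        System.out.println(stack.size()); // still 2

        // ...while pop() returns the top object and removes it.
        System.out.println(stack.pop());  // "second"
        System.out.println(stack.size()); // now 1
    }
}
```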

The difference is the format of the request and response. In my case it was simply a method call performing a small operation; in other cases, different APIs provide flexible responses with tons of uses. One thing I learned from his blog is that, to render a whole web page, the browser expects an HTML response containing presentational code, whereas a call to Google Calendar’s API would return data formatted as JSON.

Another experience I had was working with a friend on a web project, building its backend: developing the GET, POST, PUT, and DELETE endpoints that respond to requests from his frontend application. The role of this API is to return data, formatted as JSON and stored in a Firebase database, to the person working on the frontend so he could design the graphical user interface.
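Our project used Firebase, but purely as an illustrative sketch, a GET endpoint that returns JSON can look something like this in plain Java using the JDK’s built-in HttpServer (the path, port, and hard-coded data are assumptions for the example, not our actual code):

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class TodoApi {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // GET /todos returns a hard-coded JSON list; a real backend would query the database.
        server.createContext("/todos", (HttpExchange exchange) -> {
            String json = "[{\"id\":1,\"title\":\"Write blog post\",\"done\":false}]";
            byte[] body = json.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("API listening on http://localhost:8080/todos");
    }
}
```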

In short, Gazarov concludes that “any piece of software that can be distinctly separated from its environment can be an ‘A’ in API, and will probably also have some sort of API.” Since writing this blog consolidated my understanding of APIs, I hope you will also find it useful.

From the blog CS@Worcester – Vien's Blog by Vien Hua and used with permission of the author. All other rights reserved by the author.

SOLID

The SOLID principles are five principles of object-oriented programming and design. Robert Cecil Martin, also known as “Uncle Bob”, is the one who introduced this set of principles, called SOLID. Each letter of the acronym stands for one principle, each commonly abbreviated with three letters. When we work with software that is poorly managed, we end up dealing with code that becomes rigid and fragile, and that makes the software difficult to work with. When code is rigid, it is difficult to modify, difficult to change the way it currently works, and even harder to extend with new features.

·  Single Responsibility Principle (SRP)

·  Open-Closed Principle (OCP)

·  Liskov Substitution Principle (LSP)

·  Interface Segregation Principle (ISP)

·  Dependency Inversion Principle (DIP)

Single Responsibility Principle (SRP)

At its core, the Single Responsibility Principle (SRP) says there should be no more than one reason for a class to change. As a result, each class should handle a single responsibility, and every member of the class should have a direct connection to what is called the primary function of the class. When a class has multiple responsibilities, the possibility that it needs to be changed increases, and with it the risk of introducing defects every time the class is modified. By focusing on only one responsibility, this risk is much more limited.
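A small sketch of the idea (the Report and ReportSaver classes are hypothetical, not taken from the referenced articles): the report content and the persistence logic each get their own class, so each class has only one reason to change.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// One responsibility: hold the report content.
class Report {
    private final String text;
    Report(String text) { this.text = text; }
    String getText() { return text; }
}

// A separate responsibility: persisting a report.
class ReportSaver {
    void save(Report report, Path path) throws IOException {
        Files.writeString(path, report.getText());
    }
}
```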

Open-Closed Principle (OCP)

The Open/Closed Principle (OCP) means that classes should generally be open for extension but closed for modification. “Open for extension” means designing our classes so that new functionality can be added as new requirements come in. “Closed for modification” means not changing a class after we have developed it, except to correct defects. The two halves of the principle sound contradictory, but if we structure our classes and their dependencies well, we can add functionality without editing the existing source code.
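For instance, under the usual reading of OCP (a hypothetical sketch, not from the referenced articles), a calculator that depends on a Shape abstraction never needs to be edited when a new shape is added:

```java
import java.util.List;

// Open for extension: new shapes are added as new classes.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Rectangle implements Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    public double area() { return width * height; }
}

// Closed for modification: AreaCalculator never changes when a new Shape appears.
class AreaCalculator {
    double totalArea(List<Shape> shapes) {
        return shapes.stream().mapToDouble(Shape::area).sum();
    }
}
```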

Liskov Substitution Principle (LSP)

The Liskov Substitution Principle (LSP) can be stated in a variety of ways. It applies to inheritance hierarchies and says that we should design our classes so that a client’s dependencies can be replaced with subclasses without the client being aware of the change. For this reason, all subclasses must behave in the same way as their base class. It is not enough for a subclass to implement the methods and properties of its base class; it must also be consistent with the base class’s behavior, which requires following a few basic rules.
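A minimal sketch of what substitutability looks like in Java, using hypothetical Bird classes of my own: client code written against the base class keeps working when it is handed any subclass.

```java
class Bird {
    void eat() { System.out.println("eating seeds"); }
}

// The subclass keeps the same method signature, throws no new exceptions,
// and adds no surprising restrictions, so it can stand in for Bird anywhere.
class Sparrow extends Bird {
    @Override
    void eat() { System.out.println("eating seeds, quickly"); }
}

class BirdFeeder {
    void feed(Bird bird) {
        bird.eat(); // must behave sensibly for every Bird subclass
    }
}
```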

The basic rules a subclass must respect are:

– The first rule concerns the parameters of the subclass’s methods: they must not conflict with those of the base class’s methods.

– The next rules concern preconditions and postconditions; LSP also covers invariants.

– Another rule is the history constraint.

– The last LSP rule states that a subclass should not throw exceptions that are not thrown by the base class.

Interface Segregation Principle (ISP)

Some classes have public interfaces that are not cohesive. The Interface Segregation Principle (ISP) states that clients should not be forced to depend on interfaces they do not use.
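As a rough illustration (the Printer and Scanner interfaces here are my own, not from the referenced articles), splitting a “fat” interface means a simple device only implements the methods it can actually support:

```java
// Small, cohesive interfaces instead of one "fat" interface.
interface Printer {
    void print(String document);
}

interface Scanner {
    String scan();
}

// A basic device depends only on what it actually needs...
class BasicPrinter implements Printer {
    public void print(String document) { System.out.println("printing " + document); }
}

// ...while a multifunction device opts into both capabilities.
class MultiFunctionDevice implements Printer, Scanner {
    public void print(String document) { System.out.println("printing " + document); }
    public String scan() { return "scanned content"; }
}
```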

Dependency Inversion Principle (DIP)

The Dependency Inversion Principle (DIP) indicates that high-level modules should not depend on low-level modules; both should depend on abstractions. In turn, those abstractions should not depend on details; the details should depend on the abstractions. The idea of high-level and low-level modules categorizes classes hierarchically: high-level classes or modules are the ones that deal with larger groups of functionality.
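A small sketch under that reading of DIP (the MessageSender abstraction and the classes around it are hypothetical): the high-level NotificationService and the low-level EmailSender both depend on the abstraction rather than on each other, so swapping in a different sender later would not require touching the service.

```java
// The abstraction both sides depend on.
interface MessageSender {
    void send(String to, String message);
}

// High-level policy depends on the abstraction, injected through the constructor...
class NotificationService {
    private final MessageSender sender;
    NotificationService(MessageSender sender) { this.sender = sender; }
    void notifyUser(String user, String message) { sender.send(user, message); }
}

// ...and the low-level detail implements the same abstraction.
class EmailSender implements MessageSender {
    public void send(String to, String message) {
        System.out.println("emailing " + to + ": " + message);
    }
}
```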

References

https://www.jrebel.com/blog/solid-principles-in-java

https://www.geeksforgeeks.org/solid-principle-in-programming-understand-with-real-life-examples/

https://www.educative.io/edpresso/what-are-the-solid-principles-in-java

From the blog CS@worcester – Xhulja's Blogs by xmurati and used with permission of the author. All other rights reserved by the author.

Docker and Its Components

I chose this article because it explains in detail the components of Docker and how each one is applied in different phases of software development. I believe this will help unfamiliar students learn about the components of Docker beyond a general understanding. Given that we are already using Docker in this course, I imagine we will be using it for future projects as well, so a deeper understanding of this platform would be highly beneficial.

The Docker service is made up of several components. A container is like a stripped-down virtual machine in which programs can run in isolation. A Docker image defines what will make up a container and contains the directions for how and what will run. For example, a MySQL Docker image would contain the instructions for running an instance of MySQL server and exposing the necessary ports. What runs the Docker image is called the Docker Engine, which is responsible for virtualizing the container on the host machine. The last major component is docker-compose, which lets you configure storage, networking, and the interaction between containers built from Docker images. The purpose of Docker is to allow pre-configured containers to run exactly the same wherever Docker is installed. Because all containers are identical to the image they are created from, multiple containers can be run to quickly scale to meet performance demands.

I have used Docker in the past on multiple occasions, the last being for my Database Design final project, building a full-stack web application. To see how Docker is used in a full-stack application, you can review the source code at gitlab.com/mjared94/todo-app inside the server directory. In that app, we used Docker to streamline team development and simplify the backend services in use. The application required a database and a way to manage the database during development. Rather than installing the software individually on each of our computers, we used Docker to quickly launch the services identically on each machine. This can be done using docker-compose, which, as mentioned above, allows you to outline how you want your containers to interact with each other.

Our web server was run directly from the terminal relying on the docker services. In production, we could have also containerized our web server to run in conjunction with the other containers. Doing so would have enabled anyone with docker to run our todo application, along with the required backend services, without any prior configuration. Containerizing our application would also make deployment seamless because our pre-configured container could be launched on virtually any web host. After reading this article and understanding the docker components in more detail, I will be better able to use them in the future. 


Blog resource link: https://www.infoworld.com/article/3204171/what-is-docker-the-spark-for-the-container-revolution.html

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

From Umass Lowell to Unified Modeling Language: A History of My Experience With the Term “UML”

Believe it or not, I have been coding since the year 2015. As the title implies, the first university that I attended (right out of high school) was the University of Massachusetts – Lowell. Interestingly enough, within my very first textbook at UML, I would be exposed to another kind of “UML” – Unified Modeling Language. At this time, Unified Modeling Language was little more than a reference at the back of the book; there was more written on the subject, but my mind was concentrating on concepts such as how to “calloc()” in C.

Fast forward 5 years, and now Unified Modeling Language takes on a larger role; it is being used for visual representations of classes and inheritance in Java code. This can be seen with the following YouTube video, “How To Make Easy UML Sequence Diagrams and Flow Charts with PlantUML” by user “Be A Better Dev”. Essentially, the video shows how Java code can be written and then turned into a UML chart for a visual representation of classes and their features.

Personally, I selected this particular video because I enjoy using YouTube more than any other social media; this way, I can use the app for educational purposes as well as recreational ones. This nine-minute video is a great way to learn how to write UML in a format that is digestible on a busy schedule (when at work, for example). In addition, I expect the material to be applicable to aspects of the course (such as homework and exams) because it is another form of practice. Practice, practice, and yet more practice is the most important way to retain any type of coding/programming knowledge, and UML is no different.

It’s crazy to think that something barely glossed over in my educational journey five years ago would be so prevalent in the present day. However, it makes sense; programming practices such as using an arrow (->) operator or parentheses for a method are given new meanings when working with UML. Extending this further, programming syntax can create additional effects within the PlantUML environment. For example, placing an arrow on one side or another of an entity will affect exactly where it extends from on said entity.

As a final note, I am glad that I am able to work with this newfound technology. Ever since my days at UML, I have been wondering about when my code would leave the IDE environment and tackle more “lively” features (such as a graphical interface). Thanks to Unified Modeling Language, I now have a method of making my code come to life.

Article Link: https://www.youtube.com/watch?v=xObBUVDMbQs

From the blog CS@Worcester – mpekim.code by Mike Morley (mpekim) and used with permission of the author. All other rights reserved by the author.

Docker Explained.

This week in class we’ve gone over UML diagrams and the importance of being able to translate back and forth between writing code from a diagram and making a diagram based on code. The professor told us to download Visual Studio and Docker, which I’m assuming will be used for the entirety of the semester. I didn’t have a single clue as to what Docker was or why it might be needed. After a brief explanation, the professor told me to do a little reading myself, and so I did. I’m by no means a Docker expert, but the picture has become a bit clearer.

Docker is a container-based application that allows you to run services independently of each other. Containers tend to be pretty compact and carry only the information necessary for a service to work. Docker containers are created from Docker images. An image is basically just a template that tells the system how to make the container. An image can consist of many layers, each layer being a previous working version of the image. It’s important to note that an image is read-only; the purpose of the image is to load the container. The topmost layer (when the container is created) is what the user works with, whether that means making changes to the container itself or using the tools that come with it. When reading about how this technology works, the question of how something like this could be secure kept swimming through my mind, but as each layer of the image is created, it becomes a completely new and immutable image. I’m still not entirely sure how this works and will have to spend more time trying to understand it, but for now I’ll take it for what it’s worth.

Where Docker really becomes a useful tool is in its portability and reusability. For example, using a virtual machine to run certain programs or applications isn’t frowned upon, but it does tend to be costly in terms of space and memory. A 500MB application could take heaps of memory to run because the guest OS and libraries would need to run before the desired application could even be used. If you wanted to run multiple instances of that application, you would need to run multiple VMs. That’s where Docker delivers and gives the user what they need in terms of reusability.

Now, Docker containers are not a one-stop shop when it comes to solving issues. If a user is trying to use multiple servers and tries to administer them using only Docker containers, they will find themselves in a pinch due to the stripped-down capabilities of a container. A container holds only enough information for the actions necessary to complete its task, in the interest of portability. In a scenario like this, you would probably want to stick with a VM to get full use of the OS and all its resources to maintain multiple servers.

Here are two videos that brought me up to speed on just what type of software Docker is and why it is extremely useful, in just over 15 minutes. The explanations are given at a level that allows people like me, who couldn’t even begin to understand the concept, to grasp it better. I hope you enjoy the content, I did!

Containers vs VMs https://www.youtube.com/watch?v=cjXI-yxqGTI
Containerization Explained https://www.youtube.com/watch?v=0qotVMX-J5s

From the blog CS@Worcester – You have reached the upper bound by cloudtech360 and used with permission of the author. All other rights reserved by the author.

Self-Directed Professional Development Post #2

The article I’ve decided to read for this blog entry is titled, “Getting Started – An overview of Markdown, how it works, and what you can do with it.” The reason I picked this article is because it is connected to our first homework assignment. Our first homework assignment has us working with UML class diagrams and when I clicked on the web IDE on GitLab, I was brought to a file that was written in Markdown language where I would have to add my Java code and create PlantUML class diagrams. I’m not too familiar with Markdown language and since I would be interacting with it for my first homework assignment, I figured I’d do some research to learn more about it. 

The article I read starts out by defining Markdown language, “Markdown is a lightweight markup language that you can use to add formatting elements to plaintext text documents.” The article then goes into explaining why people use Markdown. I’ve learned that Markdown can be used for essentially anything: from websites, documents, notes, books, presentations, email messages, to technical documentation. I also learned that Markdown is platform independent and that the content that is created on it doesn’t get locked into a proprietary file format like Microsoft Word. 

Next, the article discusses how Markdown works. This process can be generalized into four steps:

  1. Create a Markdown file.

  2. Open the file in a Markdown application.

  3. Use the Markdown application to convert the file to an HTML document.

  4. Render the HTML document to a web browser (or another document).

During this part of the article, we also learn more about Dillinger, a Markdown editor that combines these steps. It was useful for me to learn the name of this editor because if I were to ever use Markdown for myself, I now know a common and popular editor to do so.

Lastly, the article’s main ending point is that there are many different “flavors of Markdown” and that using Markdown with one editor may provide a very different experience than using Markdown with a different editor. Many of the basic syntax may be the same but there are extended syntax elements that likely differ.

One of the biggest takeaways for me is that after I finished reading this article, I looked into other kinds of markup languages and learned how they are different from programming languages and scripting languages. Unlike programming and scripting languages, markup languages are presentation languages that do not do logical operations. It’s safe to say that, now that I’ve learned about Markdown, I feel more confident moving forward with my homework assignment (even if what I learned was going to be only a minor piece of my overall assignment).

Article: https://www.markdownguide.org/getting-started/

From the blog Sensinci's Blog by Sensinci's Blog and used with permission of the author. All other rights reserved by the author.

Understanding UML Diagram Relationships

Recently in my Software Construction, Design and Architecture class, we have been getting into UML class diagrams. Class diagrams provide an understandable view of a system’s structure and the relationships among its objects, and each relationship has a distinct visual representation based on arrows.

In this case, we started using Visual Studio Code and PlantUML to model class diagrams. I referred to the UML documentation that was provided to get a better understanding of how to display each type of arrow in PlantUML.

There are three kinds of relationships that can be represented with these arrows in PlantUML: Extension, Composition, and Aggregation. I am not totally familiar with the use of composition and aggregation, as we’ve only been focusing on models that use extends and implements in class. I wanted to get a better understanding of how to read UML class diagrams and be able to tell the relationships between the objects on the diagram.

With PlantUML and Markdown preview in Visual Studio Code, you can see changes to the UML diagram in real time as you modify the code; what you write in the code is reflected in the diagram. You can also manually draw a relationship, which provides more customization such as arrow length, direction, etc. In addition to solid lines, you can use dotted lines to represent dependencies.

Extends is represented on a UML diagram as a solid line with an empty arrowhead. By writing that Class A extends Class B, you will see the diagram update to show a line with an empty arrowhead pointing from Class A to Class B. Alternatively, you can write ClassB <|-- ClassA in PlantUML to display the same relationship on the diagram. When viewing the diagram alone, you can assume that Class A has access to all attributes and operations available in Class B.

Implements is represented on a UML diagram as a variation of the extension arrow, with a dotted line instead. By writing that Class A implements an Interface, the diagram will be updated to show a dotted line with an empty arrowhead pointing from Class A to the Interface. To draw this realization arrow on the diagram, you can write Interface <|.. ClassA. When taking a look at the diagram, you can assume that Class A implements the operations of the Interface and contains the code for them.
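For reference, here is the plain Java that corresponds to those two kinds of arrows (ClassA, ClassB, ClassC, and Printable are made-up names for illustration):

```java
// Extension: ClassA extends ClassB, drawn as a solid line with an
// empty arrowhead pointing from ClassA to ClassB.
class ClassB {
    void doSomething() { }
}

class ClassA extends ClassB {
    // inherits doSomething() and any attributes from ClassB
}

// Implementation: ClassC implements Printable, drawn as a dotted line
// with an empty arrowhead pointing from ClassC to the interface.
interface Printable {
    void print();
}

class ClassC implements Printable {
    public void print() {
        System.out.println("printing");
    }
}
```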

I found the PlantUML documentation very useful as it contains basically everything you need to know about using PlantUML to create class diagrams. I also really enjoyed the functionality to modify the example diagrams to experiment with the features that you just read up on. Overall, I think that UML class diagrams are a good way to visualize a class structure. The arrows are easy to follow and help in understanding the relationship between classes.

From the blog CS@Worcester – Null Pointer by vrotimmy and used with permission of the author. All other rights reserved by the author.

What have I learned about the term “Association”?

For this week’s in-class activities, we learned about modeling with UML class diagrams. There are many important elements that compose a class, such as properties, operations, and associations. However, my team got stuck at the “association” section: we could not clearly distinguish the different types of relationships between two classes, especially the relationship named has-a. Therefore, I thought it was a good opportunity for me to learn more about the term “association”.

There are many articles and blogs analyzing the different types of relationships between classes. However, many of them look very complicated, with a long list of different relationship types, and some are very confusing. Fortunately, I found an article entitled Association, Composition and Aggregation in Java, which finally satisfied my curiosity about “association”, because its content focuses only on the definition of “association” and the analysis of its two types, aggregation and composition.

There are some key points that I have taken away after reading the article. First, association represents a relationship between two classes, depending on each situation, association can be one-to-one, one-to-many, many-to-one, and many-to-many. Second, association has two special forms, which are aggregation and composition.

Aggregation is also known as the has-a relationship (a weak association): it is a one-way relationship, and each entity in the relationship can exist independently. The author provides a simple example to describe these characteristics of aggregation: an Institute has-a Department, and a Department has-a Student. This is a one-way relationship because a department can have students, but not vice versa. Moreover, if the Department class is removed, the Student class can still exist independently.

Composition is the part-of relationship (a strong association), in which the two entities are dependent on each other: one class (the child) cannot exist without the other class (the parent). For this definition, the author gives the example of Book and Library: a Book is part-of a Library, so if the Library class is removed, the Book class cannot exist. In my opinion, this example does not fit the part-of relationship well, because I think a Book can still exist without a Library. Therefore, I looked for other resources to find a better example of composition. UML Association vs Aggregation vs Composition is a good resource that provides real-life examples for each concept. Like the first article, it also focuses on analyzing the term “association” and its two special types, aggregation and composition. From that article I got the example that Head, Hand, and Leg are part-of Person; thus, if the Person class is removed, the Head, Hand, and Leg classes cannot exist. For me, this is a perfect example of composition.
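In Java, the difference often shows up in which object creates and owns the other. A small sketch along the lines of the articles’ examples (the exact fields and methods are my own, not from either article):

```java
import java.util.ArrayList;
import java.util.List;

// Aggregation (has-a, weak): Students are created outside the Department
// and can keep existing even if the Department is removed.
class Student {
    final String name;
    Student(String name) { this.name = name; }
}

class Department {
    private final List<Student> students = new ArrayList<>();
    void enroll(Student s) { students.add(s); }
}

// Composition (part-of, strong): the Head is created and owned by the Person,
// so it cannot exist without its Person.
class Head { }

class Person {
    private final Head head = new Head(); // lifetime tied to the Person
}
```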

In short, I learned a lot and gained a clear understanding of the term “association” from the two articles recommended above. I will apply this knowledge to design relationships between classes in my UML diagrams. Moreover, I believe the articles I chose are good resources because their content is clearly organized with good examples. In other words, the two articles complement each other, so readers can pull the best parts from each and combine them.

From the blog CS@Worcester – T&#039;s CSblog by tyahhhh and used with permission of the author. All other rights reserved by the author.

Docker

On my journey to find my very first internship as a software developer, I’ve noticed that the majority of job posts require Docker experience. From there, I realized that Docker is an essential tool for software developers and their professional careers. This blog post is about Docker: what it is and why it is necessary nowadays.

Docker is a container runtime. A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.

The good thing about Docker is that it helps developers get their application to work the same on every machine. Also, having the application’s libraries and dependencies packaged and ready to execute makes Docker even better compared to competitors in the same category. Besides that, Docker is lightning fast and very easy to maintain.

Running applications in containers also brings many benefits to developers, such as:

  • Portability: Once developers have their containerized application running on their own machines, they can deploy it to other environments and be assured that it will perform the same as it does on their own.
  • Performance: VMs are an alternative for developers, but Docker offers much more than regular VMs: containers are faster to deploy, quicker to start, and have a smaller footprint than ordinary VMs.
  • Agility: The portability and performance of containers reduce time spent and make the development process more responsive and agile. These advantages provide a better way to deliver the right software at the right time.
  • Isolation: A Docker container that contains one application also includes the relevant versions of any supporting software that the application requires. If other Docker containers have different versions of the same supporting software, that is not a problem, because the containers are independent of each other.

Most uses of Docker simply make developers’ lives better while developing applications, but that does not mean Docker can entirely replace virtual machines. VMs are still needed when we want a whole operating system for each customer, or a complete sandbox. VMs are also still used as middle layers when you have a big server framework and many customers using it. Despite all the good things Docker brings to developers, VMs still have a firm grip on the industry and the development cycle.

From the blog CS@Worcester – Nin by hpnguyen27 and used with permission of the author. All other rights reserved by the author.