Category Archives: Week 7

What is Refactoring?

Refactoring is the process of restructuring code without changing or adding to its functionality or external behavior. There are a lot of ways to go about refactoring, but most of them come down to applying small, standard transformations. Because each change is so tiny, the software's functionality and behavior are preserved and the changes are less likely to introduce errors. So, what is the point of refactoring? The goal is to turn messy, confusing code into clean, understandable code. Messy code is hard to understand and maintain, and when it's time to add a required feature it causes a lot of problems because the code is already confusing. With clean code, it's easier to make changes and fix problems. Clean code also means that anybody who works with it can understand it and appreciate how organized it is. When messy code isn't cleaned up, it slows down feature development because developers have to spend extra time understanding and tracing the code before they can change anything.

Knowing when to refactor is important, and there are different times to do it. Refactoring during code review, before the code goes live, is one of the best times to clean things up and make any changes before pushing it through. You can also set aside certain parts of your day to refactor instead of doing it all at once. By cleaning your code you are able to catch bugs before they create problems. The main benefit of refactoring is that cleaning up dirty code reduces technical debt. Clean code is easier to read, and anybody besides the original developer who works on that code can easily understand it, maintain it, and add features to it. The less complicated the code is, the easier source-code maintenance becomes. A clean design can also serve as a tool for other developers and become the basis for code elsewhere. This is why I believe refactoring is important: changing even the smallest piece of code can lead to a more functional approach to programming, and it helps developers understand the code better and make better design decisions.
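
To make this concrete, here is a minimal sketch in Java of what a behavior-preserving refactoring can look like. The receipt example and its method names are made up for illustration, not taken from any particular codebase.

    class ReceiptExample {

        // Before: one long method mixes validation, calculation, and formatting
        static String receiptBefore(double[] prices) {
            if (prices == null || prices.length == 0) {
                throw new IllegalArgumentException("no items");
            }
            double total = 0;
            for (double p : prices) {
                total += p;
            }
            return String.format("Total: $%.2f", total);
        }

        // After: small, behavior-preserving extractions give each step a clear name
        static String receipt(double[] prices) {
            validate(prices);
            return format(total(prices));
        }

        static void validate(double[] prices) {
            if (prices == null || prices.length == 0) {
                throw new IllegalArgumentException("no items");
            }
        }

        static double total(double[] prices) {
            double sum = 0;
            for (double p : prices) {
                sum += p;
            }
            return sum;
        }

        static String format(double total) {
            return String.format("Total: $%.2f", total);
        }
    }

The output is identical before and after; only the structure changed, which is exactly what makes each individual refactoring step low-risk.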

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.

REST APIs

REST stands for Representational State Transfer. This means that when a client requests a resource using a REST API, the server transfers back the current state of the resource in a standardized representation.
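
As a rough illustration (the URL and resource here are made up, and this is just one way to issue the request from Java), a client asking for a resource and receiving its JSON representation might look like this:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestClientExample {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // GET asks the server for the current state of the "users/42" resource
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/users/42"))
                    .header("Accept", "application/json") // ask for a JSON representation
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            // The body is the standardized representation (here, JSON) of the resource's state
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }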

From the blog CS@Worcester – Andres Ovalles by ergonutt and used with permission of the author. All other rights reserved by the author.

Law of Demeter

Over the course of this class (Software Construction, Design, and Architecture), there have been design concepts that are very easy to grasp at first glance, and others that take much more time to digest. I was surprised to find that the Law of Demeter, or Principle of Least Knowledge, is a fairly intuitive rule, yet one that feels almost too restrictive in nature.

Essentially, the Law of Demeter is the concept that methods in an object should not communicate with any element that isn’t ‘adjacent’ to it. According to a blog post by JavaDevGuy (and thinking of a Java application to the rule), the elements that are allowed by the law are the object itself (this), objects in the argument of the method, instance variables of the object, objects created by the method, global variables, and methods that the method calls.

This is most easily explained by a negative example: if a class Car has a method that takes a Dashboard object as an argument, that method can legitimately call something like dashboard.getVersion(). But if a Garage method takes a Car as an argument, it should not call something like car.getDashboard().getVersion(). Maybe this is a silly example, but the same idea applies to more practical code as well.
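
A small Java sketch of that situation might look like the following; the getDashboardVersion() method is my own hypothetical fix, not something taken from the blog post.

    class Dashboard {
        String getVersion() { return "2.1"; }
    }

    class Car {
        private final Dashboard dashboard = new Dashboard();

        Dashboard getDashboard() { return dashboard; }

        // Demeter-friendly: Car talks to its own instance variable and
        // exposes just the answer its callers need
        String getDashboardVersion() { return dashboard.getVersion(); }
    }

    class Garage {
        void logVersion(Car car) {
            // Violation: Garage reaches "through" Car into an object it
            // shouldn't know about
            // String v = car.getDashboard().getVersion();

            // Allowed: Garage only talks to its direct collaborator, Car
            String v = car.getDashboardVersion();
            System.out.println("Dashboard version: " + v);
        }
    }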

JavaDevGuy goes further to say that most Getter methods violate this law. This interpretation seems restrictive, as it makes it much more difficult to just get work done (of course, I’m not the most experienced in the design aspect of software engineering so I could be wrong). It seems more practical to use the law to get rid of chaining in your code, as it causes needless complexity. Chaining methods together, regardless of how necessary it is, always ends up looking kind of ugly. I do feel like it is a necessary evil sometimes though.

As it stands, I can understand that this sort of practice can minimize the amount of complexity and reduce code repetition, but it does feel like sometimes you need to chain things together this way to get the desired output. The aforementioned blog post explains when code is violating the law, but unless my eyes and brain are too tired to read properly, the author doesn't really give any good replacement options for the offending code, and the few alternatives given don't seem very desirable. This is the typical problem with negative rules: they impose a restriction without offering a solution, so you have to scramble to figure out how to work around it.

Perhaps I’ll understand better when we cover this material in class.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

CS343 – Week 7

SOLID is an acronym for five principles that help developers write high-quality, maintainable, and scalable code: the Single Responsibility Principle (SRP), Open-Closed Principle (OCP), Liskov Substitution Principle (LSP), Interface Segregation Principle (ISP), and Dependency Inversion Principle (DIP). Each provides its own benefits, and they work in tandem when designing a program.

SRP states that a class should only have one responsibility, meaning only one reason to change. This helps prevent a class from having too many responsibilities that can affect each other when one is changed. Following SRP ensures that the code will be easier to comprehend and prone to fewer errors. However, it is harder than it sounds to fulfill this principle. The quickest solution to adding a new method or functionality would be to add it to existing code, but this could lead to trouble down the road when trying to maintain the code.
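
As a minimal sketch (the report classes here are hypothetical, just to show the shape of the split):

    // Before: one class has two reasons to change (report content and persistence)
    class Report {
        String buildBody() { return "Quarterly numbers..."; }
        void saveToFile(String path) { /* file I/O mixed in with report logic */ }
    }

    // After: each class has exactly one responsibility
    class ReportBuilder {
        String buildBody() { return "Quarterly numbers..."; }
    }

    class ReportSaver {
        void saveToFile(String body, String path) { /* only persistence lives here */ }
    }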

OCP states that software classes, modules, functions, etc. should be open for extension but closed for modification. This is essential because it allows entities to be extended without being modified, so developers can add new functionality without risking breaking existing code. Adding a level of abstraction with interfaces helps keep the design loosely coupled.
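
A small sketch of the idea in Java (the notification classes are invented for illustration):

    // Closed for modification: Notifier never has to change when channels are added
    interface MessageChannel {
        void send(String message);
    }

    class Notifier {
        private final java.util.List<MessageChannel> channels;

        Notifier(java.util.List<MessageChannel> channels) {
            this.channels = channels;
        }

        void broadcast(String message) {
            for (MessageChannel channel : channels) {
                channel.send(message);
            }
        }
    }

    // Open for extension: new behavior arrives as a new class, not as an edit
    class EmailChannel implements MessageChannel {
        public void send(String message) {
            System.out.println("Email: " + message);
        }
    }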

LSP states that any instance of a derived class should be substitutable for an instance of its base class without affecting the program in a harmful way. The importance of this principle revolves around keeping the behavior of the program consistent and predictable. Unfortunately, there is no easy way to enforce this principle automatically, so the developer must add test cases for the objects of each subclass to ensure that substituting them does not significantly change the program's behavior.
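
One practical way to express those test cases is to write them against the base type, so every subclass instance is run through the same check. The account classes below are hypothetical, just a sketch of the pattern:

    class Account {
        protected double balance = 100.0;

        void withdraw(double amount) {
            if (amount <= balance) {   // contract: the balance never goes negative
                balance -= amount;
            }
        }

        double getBalance() { return balance; }
    }

    class SavingsAccount extends Account {
        private int withdrawals = 0;

        @Override
        void withdraw(double amount) {
            withdrawals++;             // adds behavior without weakening the contract
            super.withdraw(amount);
        }
    }

    class LspCheck {
        // Run the same check on an Account and on an instance of every subclass
        static void checkNeverNegative(Account account) {
            account.withdraw(500.0);
            if (account.getBalance() < 0) {
                throw new AssertionError("LSP violated: balance went negative");
            }
        }
    }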

ISP focuses on designing interfaces that are specific to a user's needs. Instead of creating one large interface that covers all methods, it is more beneficial to split the methods across smaller, more focused interfaces that are less coupled. For example, having too many methods in one interface can cause issues in the code, so it is better to separate the methods into individual, focused interfaces that can each be implemented by the classes that actually need them.
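
A brief sketch (the printer and scanner interfaces are made up for illustration):

    // Instead of one bloated "Machine" interface, each capability gets its own interface
    interface Printer {
        void print(String document);
    }

    interface DocumentScanner {
        void scan(String document);
    }

    // A simple device implements only the interface it actually needs
    class BasicPrinter implements Printer {
        public void print(String document) {
            System.out.println("Printing " + document);
        }
    }

    // A multifunction device opts in to both
    class OfficeMachine implements Printer, DocumentScanner {
        public void print(String document) {
            System.out.println("Printing " + document);
        }
        public void scan(String document) {
            System.out.println("Scanning " + document);
        }
    }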

DIP states that high-level modules should not depend on low-level modules; both should depend on abstractions. This approach aims to reduce coupling between modules, increase modularity, and make the code easier to maintain, test, and extend. An important thing to note is that both high-level and low-level modules end up depending on the same abstractions. Dependency Inversion works hand in hand with the other SOLID principles, which in turn leads to more refined and maintainable code.
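
A compact sketch of that arrangement (the message-related names are hypothetical):

    // The abstraction that both levels depend on
    interface MessageStore {
        void save(String message);
    }

    // High-level module: depends only on the abstraction, never on a concrete store
    class MessageService {
        private final MessageStore store;

        MessageService(MessageStore store) {
            this.store = store;
        }

        void post(String message) {
            store.save(message);
        }
    }

    // Low-level module: also depends on the abstraction by implementing it
    class ConsoleMessageStore implements MessageStore {
        public void save(String message) {
            System.out.println("Saved: " + message);
        }
    }

    // Wiring happens at the edge of the application:
    // new MessageService(new ConsoleMessageStore()).post("hello");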

From the blog CS@Worcester – Jason Lee Computer Science Blog by jlee3811 and used with permission of the author. All other rights reserved by the author.

Encapsulating What Varies

For this week’s blog post, I chose to discuss the design principle of encapsulating what varies, as described in the article “Encapsulate What Varies (EWV) – Design Principle” by Pankaj. The article discusses the rationale behind encapsulating what varies, its two aspects, its benefits, strategies for implementing it, and an example of what it looks like in practice. It lines up with the design principles section of the syllabus and its sub-section on encapsulating what varies. I especially enjoyed the section discussing the benefits, so in this blog post I will review a few of the benefits of encapsulating what varies that are mentioned in the article.

The first benefit mentioned in the article that I will be discussing is flexibility. “Flexibility: By encapsulating the parts of the system that are subject to change, we make it easier to modify or replace those parts without affecting the rest of the system. This makes the system more flexible and easier to adapt to changing requirements.” As stated in the article, encapsulating what varies makes the resulting project significantly more straightforward to maintain and update in the future. If you isolate the factors that are likely to change, you no longer have to adjust one extensive method or class and risk conflicts with other parts of the project; updates are far less likely to cause problems in unrelated systems.

The next benefit of encapsulating what varies that I will discuss is reusability. “Reusability: By creating abstractions that represent the parts of the system that are subject to change, we make it possible to reuse those abstractions in different contexts. This can save time and effort when developing new features or applications.” Being able to reuse parts of your code in other areas is very useful for reducing time spent on development and bug testing. Instead of writing more methods or classes that could conflict with preexisting ones, you are reusing methods or classes that you already know will not conflict with other areas of your project.

Finally, I will discuss one of the greatest benefits that comes with encapsulating what varies: maintainability. “Maintainability: By isolating the impact of changes to a specific part of the system, we make it easier to maintain the system over time. This reduces the risk of introducing unintended side effects or breaking existing functionality.” As I mentioned earlier in this blog post, isolating the frequently changing parts of a project makes it much easier to diagnose bugs or other issues that come up as the project is developed and updated over time.
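
To tie these benefits together, here is a small sketch of the principle in Java; the shipping-cost example is my own, not from the article. The pricing rule is the part that varies, so it is hidden behind one interface while the rest of the checkout code stays stable.

    // The part that varies: how shipping cost is computed
    interface ShippingCost {
        double costFor(double weightKg);
    }

    class FlatRate implements ShippingCost {
        public double costFor(double weightKg) { return 5.0; }
    }

    class ByWeight implements ShippingCost {
        public double costFor(double weightKg) { return 1.5 * weightKg; }
    }

    // The stable part: Checkout never changes when a new pricing rule is added
    class Checkout {
        private final ShippingCost shipping;

        Checkout(ShippingCost shipping) {
            this.shipping = shipping;
        }

        double total(double itemsTotal, double weightKg) {
            return itemsTotal + shipping.costFor(weightKg);
        }
    }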

Article: https://www.neatcode.org/encapsulate-what-varies/

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.

Week 7: CS-348

Version Control

Version control is a software development practice that tracks and manages every change made to a code base. Tracking changes allows developers to see what changes were made, who made them, and when they were made. This history enables developers to revert to previous versions in case of any irreparable damage.

Version control allows for multiple developers to work on a project concurrently. When multiple changes are made at once, conflicts can occur. Version control can identify those conflicts to allow development teams to quickly compare the changes and decide how to handle the conflict. Version control streamlines coordination, sharing, and collaboration.

Types of Version Control Systems

A version control system (VCS) is the software that tracks changes made to files. The two most common types of VCS are centralized and distributed.

Centralized VCS (CVCS)

In a centralized VCS, all files are stored in one central repository that developers work from. The central repository can be hosted on a server or on a local machine. A CVCS is most commonly used in projects where teams need to share code and track changes in one place.

Distributed VCS (DVCS)

A distributed VCS stores files across multiple repositories, giving developers access to the files from multiple locations. A DVCS is often used when developers need to work on projects from multiple machines or collaborate with others remotely.

Lock-Based

Less commonly used, lock-based version control uses file locking to manage concurrent access to files and resources. File locking prevents more than one user from making changes to a file or resource at a time, eliminating conflicting changes.

Optimistic

Optimistic VCS gives every developer their own private workspace. Once changes are made and are ready to be shared, a request is made to the server. Then the server looks at the changes and determines which can be safely merged together.

Popular Version Control Systems

Git

Git is the most popular version control system. It is an open-source distributed VCS that can be used with software projects of any size, which makes it a popular choice no matter the project.

Subversion (SVN)

Subversion is a centralized VCS; therefore, all project files are kept in one main repository. Having a single source of truth rather than many full copies of the history can make large projects easier to manage. A form of file locking is in place, allowing users to restrict access to subfolders.

Mercurial

Mercurial is another distributed version control system. Mercurial offers an intuitive command line interface that allows developers to use this system immediately.

Conclusion

I chose this resource because it clearly explains what version control is and why it’s important. Before reading this article, I was unaware of the different types of version control systems, and the popular choices that implement them such as Subversion. I also learned when each type of VCS might be more useful than another. This is only the beginning of my knowledge of version control systems. As my journey into software development continues, my understanding of VCSs will only broaden.

Resources:

https://about.gitlab.com/topics/version-control/

From the blog CS@Worcester – Zack's CS Blog by ztram1 and used with permission of the author. All other rights reserved by the author.

Agile & Scrum

The podcast episode “Scrum vs Agile & Keys to Success with Mike Cohn” discusses ways to succeed using the Agile methodology and how to work with the Scrum Guide to create an efficient plan for your team. I selected this episode because I believe that one of the major aspects of building an efficient team is the process you follow to complete your work, and to do that you must follow a methodology and create a plan for reaching your goal. In class, we learned about Agile and the advantages it has over waterfall, so listening to a podcast about Agile was very intriguing.

The episode emphasizes that you don’t have to adhere to the Scrum Guide as if it were a rule book; you have to work with your team to find out which aspects of the guide work best for the team to allow for maximum efficiency. This is a big mistake many people make because they are scared of “breaking the rules,” but it is something you must be able to do if you want to elevate your work to the next level.

I thought the podcast was very interesting since we recently learned about these methods in class, where we discussed the steps of Agile and the benefits it has compared to the waterfall methodology we learned about prior. We also went over the Scrum Guide and the elements it highlights to help users understand Scrum, and we were shown that you are allowed to deviate from the guide; it is just a basic framework to help put the steps in an effective order. The episode gave valuable insights into how these methods work in real company settings, reinforcing that you don’t have to follow the guide to a tee and must find what works best for your team to allow for the most efficient work. Through its discussion of real-world examples, it also pushed me to think further about what we learned in class and how to connect those ideas to work experiences. When it comes time for me to apply this to my own work, it will be helpful to remember that the Scrum Guide is not a rule book but a guide to draw from to support efficient teamwork.

From the blog CS@Worcester – Giovanni Casiano – Software Development by Giovanni Casiano and used with permission of the author. All other rights reserved by the author.

CS343 Blog Post for Week of October 22

This week, I wanted to continue from my previous entry describing the Single Responsibility Principle. The next concept named in the SOLID design philosophies is the Open-Closed Principle. I first heard the name of this design philosophy in one of my computer science courses. I couldn’t figure out what exactly it meant just going intuitively from the name, however, so I returned to the Stackify website from my previous blog post to continue reading about the SOLID principles.

The Open-Closed Principle was first coined by Bertrand Meyer in his 1988 book “Object-Oriented Software Construction”. Robert C. Martin would later adopt the Open-Closed Principle, calling it “the most important principle of object-oriented design”. He would explain the Open-Closed Principle as “Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.” In practice, this means programming such that new functionalities may be added without changing the existing code.

Bertrand Meyer originally proposed inheritance as a means of abiding by the Open-Closed design principle, but this introduces the problem of tight coupling between software classes. If one class is too dependent on a parent class to function, it will be difficult to make changes to one class without modifying the other class to maintain functionality. Instead, Robert C. Martin suggests using interfaces rather than concrete superclasses, redefining this design philosophy as the Polymorphic Open/Closed principle. The abstraction provided by interfaces helps avoid the problem of tight coupling between software classes.

The article goes on to provide a practical example of the application of the Open/Closed principle through creating a software class representing a coffee machine, and an accompanying class representing a simple app that controls the coffee machine. The article presents the situation where the user may have a different kind of coffee machine that they would like to control with the same software app. The process of refactoring the BasicCoffeeMachine code to implement a broader CoffeeMachine interface is detailed, as well as the process of refactoring the CoffeeApp to utilize the new CoffeeMachine interface. This way, the functionality of our software was greatly expanded, without having to remove large portions of our previously written code.
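
My rough understanding of the shape of that refactoring, written as a simplified sketch rather than the article's actual code (the premium machine and the method names are my own stand-ins):

    interface CoffeeMachine {
        void brewCoffee(String selection);
    }

    class BasicCoffeeMachine implements CoffeeMachine {
        public void brewCoffee(String selection) {
            System.out.println("Brewing filter coffee: " + selection);
        }
    }

    class PremiumCoffeeMachine implements CoffeeMachine {
        public void brewCoffee(String selection) {
            System.out.println("Grinding beans, then brewing: " + selection);
        }
    }

    // The app depends only on the interface, so supporting another machine
    // never requires editing CoffeeApp itself
    class CoffeeApp {
        private final CoffeeMachine machine;

        CoffeeApp(CoffeeMachine machine) {
            this.machine = machine;
        }

        void prepareCoffee(String selection) {
            machine.brewCoffee(selection);
        }
    }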

I chose to research more about the Open/Closed Principle this week because I don’t find myself using interfaces in my projects as often as I should. I could save space and make my code more efficient if I took care to design my software classes against an interface. The example in the article seems to employ the Strategy pattern when instantiating the CoffeeMachine objects. Understanding both the Strategy pattern and the Open/Closed Principle will help me make better use of interfaces when designing software classes in the future.

From the blog CS@Worcester – Michael's Programming Blog by mikesprogrammingblog and used with permission of the author. All other rights reserved by the author.

Week 7: CS-343

Object Oriented Programming Principles

In object-oriented programming there are four basic principles: Encapsulation, Abstraction, Inheritance, and Polymorphism. These principles are so fundamental that they are referred to as the four pillars of object-oriented programming.

Encapsulation

Encapsulation is hiding or protecting the data of a class. This is achieved by restricting direct access to that data and exposing it only through public methods: variables within a class are kept private, while public accessor methods are provided to reach the private variables.

Encapsulating data helps prevent unauthorized modification by only allowing access through the defined accessor methods. For example, rather than accessing or modifying a class's variables directly, one would create “getter” and “setter” methods that still provide access to the data. These methods give users the same functionality, but without the risk of undesired changes.
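
A small example of what that looks like in Java (the bank account class is just an illustration):

    public class BankAccount {
        // the data is private, so outside code cannot change it directly
        private double balance;

        // "getter": exposes the value without allowing modification
        public double getBalance() {
            return balance;
        }

        // "setter"-style method: controls exactly how the value may change
        public void deposit(double amount) {
            if (amount > 0) {          // undesired changes are rejected here
                balance += amount;
            }
        }
    }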

Abstraction

Abstraction is showing only relevant data of classes. Abstraction enables working with high level mechanisms of a class rather than the specific details of implementation, thus reducing complexity.

Picture a system storing different types of vehicles. Rather than creating unrelated concrete classes for each type of vehicle, abstraction can be applied to create one class, ‘Vehicle’, that provides the framework of basic behaviors and attributes all vehicles share. These could include methods and attributes such as ‘start()’ and ‘stop()’, and ‘make’ and ‘model’. Then classes for each type of vehicle can be made to extend the ‘Vehicle’ class and add their own specific implementations of those methods and attributes, depending on the vehicle.
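
In Java, that might be sketched with an abstract class (the Motorcycle subclass is just one hypothetical vehicle type):

    // High-level view shared by every vehicle; the details are left to subclasses
    abstract class Vehicle {
        protected String make;
        protected String model;

        Vehicle(String make, String model) {
            this.make = make;
            this.model = model;
        }

        abstract void start();
        abstract void stop();
    }

    class Motorcycle extends Vehicle {
        Motorcycle(String make, String model) {
            super(make, model);
        }

        void start() {
            System.out.println("Kick-starting " + make + " " + model);
        }

        void stop() {
            System.out.println("Engine off");
        }
    }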

Inheritance

Inheritance can be defined as an “is-a” relationship between a parent class and its child classes. A child class derives all methods and attributes from the parent class, enabling reuse of code, while also allowing the addition of its own unique attributes and methods.

Imagine a system for a college representing faculty and students. A parent class, ‘Person’, is created for common data among all people at the college such as ‘name’ and ’email’. Child classes of ‘Person’ can be created such as ‘Faculty’ and ‘Student’. Both child classes would inherit ‘name’ and ’email’ from the ‘Person’ class, while unique information can be added to each of the child classes. Unique attributes could include ‘gpa’ for ‘Student’ and ‘salary’ for ‘Faculty’.
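
Translated into a quick Java sketch:

    class Person {
        protected String name;
        protected String email;

        Person(String name, String email) {
            this.name = name;
            this.email = email;
        }
    }

    // Each child class inherits name and email, then adds its own unique data
    class Student extends Person {
        private double gpa;

        Student(String name, String email, double gpa) {
            super(name, email);
            this.gpa = gpa;
        }
    }

    class Faculty extends Person {
        private double salary;

        Faculty(String name, String email, double salary) {
            super(name, email);
            this.salary = salary;
        }
    }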

Polymorphism

Polymorphism can be simply put as reusing code with different types of objects, reducing code redundancy.

Using an interface called ‘Shape’ with a method ‘calculateArea()’, different types of shapes can implement ‘calculateArea()’ with their own behavior. For example, a square calculates its area differently than a circle, yet both can be used through the same ‘calculateArea()’ call thanks to polymorphism.
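
A quick sketch of that Shape idea in Java:

    interface Shape {
        double calculateArea();
    }

    class Square implements Shape {
        private final double side;

        Square(double side) { this.side = side; }

        public double calculateArea() { return side * side; }
    }

    class Circle implements Shape {
        private final double radius;

        Circle(double radius) { this.radius = radius; }

        public double calculateArea() { return Math.PI * radius * radius; }
    }

    class AreaDemo {
        // The same call works for any Shape; each type supplies its own behavior
        static double totalArea(java.util.List<Shape> shapes) {
            double total = 0;
            for (Shape s : shapes) {
                total += s.calculateArea();
            }
            return total;
        }
    }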

Conclusion

As we learned earlier in the semester, many of us did not have a complete understanding of the four principles above, which is why I chose to learn more about them. After reading the blog, I now better understand the differences and why each of the principles are important. I will be implementing these principles into all future projects.

Resources:

View at Medium.com

From the blog CS@Worcester – Zack's CS Blog by ztram1 and used with permission of the author. All other rights reserved by the author.

Software Licensing Compliance

The article I chose for this week’s blog post is “A PRIMER ON OPEN SOURCE LICENSE COMPLIANCE” by Dasha Gurova. This article discusses what Open Source Licensing is, what software licenses are, why you should implement an open source license compliance policy, some examples of semi-automated OSS compliance tools, and how to use OSS (open source software) compliance systems. I will be discussing the ways of implementing OSS compliance systems mentioned in the article, those being the manual and semi-automatic methods, and their strengths and weaknesses.

The methods the article mentions for maintaining OSS compliance are manual and semi-automatic. The manual process is a time-intensive, old-school approach. “A surprising number of companies are still using this approach. Basically, you create a spreadsheet and manually fill it out with components, versions, and licenses and analyze it against your policy.” The article also says that this method can be very effective on smaller projects if implemented early in the project’s development; it entails reviewing and logging each open-source component’s license before bringing that component into the project. If it is not adopted early, however, the method can be very difficult to execute properly, and it becomes much more challenging to maintain as your project uses more and more OSS components, especially when it comes to making sure the licenses you are using do not conflict with one another and are being adequately followed. These issues only get worse when the records are not maintained properly or the project is large.

The other method mentioned in the article is the semi-automatic method. “This is a more reliable approach and is becoming more popular, as the importance of open source compliance practices grows along with the risks associated with ignoring these practices. There are many tools available, in a way that it gets overwhelming. Why semi-automated? Because there are always false positives if the license is not explicitly referenced in the header and you still have to read through some of them to discover special terms or conditions.” As mentioned in the article, this method is far more reliable, but teams that use it must stay vigilant and check whether the warnings the tools give are false positives. If a team simply assumes every warning is accurate, it could waste a lot of effort hunting for replacement OSS that satisfies its policy, even though the components it already has would have worked fine.

Article: https://www.zenko.io/blog/get-started-with-open-source-license-compliance/

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.