Category Archives: CS-343

law of demeter

Over the course of this class (Software Construction, Design, and Architecture), there have been design concepts that are very easy to grasp at first glance, and others that take much more time to digest. I was surprised to find that the Law of Demeter, or Principle of Least Knowledge, is a fairly intuitive rule, yet one that feels almost too restrictive in nature.

Essentially, the Law of Demeter is the concept that a method of an object should not communicate with any element that isn’t ‘adjacent’ to it. According to a blog post by JavaDevGuy (and thinking of a Java application of the rule), the elements allowed by the law are the object itself (this), objects passed as arguments to the method, instance variables of the object, objects created by the method, global variables, and methods that the method itself calls.

This is most easily explained with a negative example: if a class Car has a method that takes a Dashboard object as an argument, that method can legitimately call something like dashboard.getVersion(). But if a Garage method takes a Car argument, the method should not call something like car.getDashboard().getVersion(). Maybe this is a silly example, but the same idea applies to more practical code as well.
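
To make the distinction concrete, here is a minimal Java sketch of the Car/Dashboard/Garage situation; it is only an illustration of the rule, and the delegating method getDashboardVersion() is my own hypothetical addition rather than something from the blog post.

```java
class Dashboard {
    private final String version = "2.1";

    public String getVersion() {
        return version;
    }
}

class Car {
    private final Dashboard dashboard = new Dashboard();

    public Dashboard getDashboard() {
        return dashboard;
    }

    // Delegating method: callers ask the Car instead of reaching into its parts
    public String getDashboardVersion() {
        return dashboard.getVersion();
    }
}

class Garage {
    // Violation: Garage reaches "through" the Car into the Dashboard
    public void printVersionBad(Car car) {
        System.out.println(car.getDashboard().getVersion());
    }

    // Law of Demeter friendly: Garage only talks to its direct collaborator
    public void printVersionGood(Car car) {
        System.out.println(car.getDashboardVersion());
    }
}
```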

JavaDevGuy goes further to say that most Getter methods violate this law. This interpretation seems restrictive, as it makes it much more difficult to just get work done (of course, I’m not the most experienced in the design aspect of software engineering so I could be wrong). It seems more practical to use the law to get rid of chaining in your code, as it causes needless complexity. Chaining methods together, regardless of how necessary it is, always ends up looking kind of ugly. I do feel like it is a necessary evil sometimes though.

As it stands, I can understand that this sort of practice can minimize the amount of complexity and reduce code repetition, but it does feel like sometimes you need to chain things together in this way to get the desired output. The aforementioned blog post explains when code is violating the law, but unless my eyes and brain are too tired to read properly, the author doesn’t really give any good replacement options for the offending code. The few alternatives given don’t seem very desirable. This is typically the problem with negative rules: they impose a restriction without offering a solution, so you have to scramble to figure out how to work around it.

Perhaps I’ll understand better when we cover this material in class.

From the blog CS@Worcester – V's CompSCi Blog by V and used with permission of the author. All other rights reserved by the author.

CS343 – Week 7

SOLID is an acronym that stands for the five principles that help developers write high-quality, maintainable, and scalable code. These are the Single Responsibility Principle (SRP), Open-Closed Principle (OCP), Liskov Substitution Principle (LSP), Interface Segregation Principle (ISP), and Dependency Inversion Principle (DIP). Each provides its own benefits, and they work in tandem when designing a program.

SRP states that a class should only have one responsibility, meaning only one reason to change. This helps prevent a class from having too many responsibilities that can affect each other when one is changed. Following SRP ensures that the code will be easier to comprehend and prone to fewer errors. However, it is harder than it sounds to fulfill this principle. The quickest solution to adding a new method or functionality would be to add it to existing code, but this could lead to trouble down the road when trying to maintain the code.
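
To make SRP concrete, here is a small hedged Java sketch of my own (the Report classes are hypothetical, not from the post): the first version has two reasons to change, the second splits them apart.

```java
// Violates SRP: one class has two reasons to change
// (the report format and the storage mechanism).
class Report {
    String buildHtml(String data) { return "<html>" + data + "</html>"; }
    void saveToFile(String html, String path) { /* file I/O here */ }
}

// Follows SRP: each class has a single responsibility.
class ReportFormatter {
    String buildHtml(String data) { return "<html>" + data + "</html>"; }
}

class ReportSaver {
    void saveToFile(String html, String path) { /* file I/O here */ }
}
```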

OCP states that software classes, modules, functions, etc. should be open for extension but closed for modification. This is essential because it allows entities to be extended without being modified, so developers can add new functionality without risking breaking existing code. Adding an additional level of abstraction through interfaces helps design the program with loose coupling.
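
As a rough sketch of what that interface-based extension can look like (the Notifier example is my own, not from the original source):

```java
// Open for extension, closed for modification: adding a new notification
// channel means adding a class, not editing the existing ones.
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) { System.out.println("Email: " + message); }
}

class SmsNotifier implements Notifier {
    public void send(String message) { System.out.println("SMS: " + message); }
}

class AlertService {
    private final java.util.List<Notifier> notifiers;

    AlertService(java.util.List<Notifier> notifiers) { this.notifiers = notifiers; }

    void alertAll(String message) { notifiers.forEach(n -> n.send(message)); }
}
```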

LSP states that any instance of a derived class should be substitutable for an instance of its base class without affecting the program in a harmful way. The importance of this principle revolves around the ability to ensure the behavior of the program remains consistent and predictable. Unfortunately, there is no easy way to enforce this principle, so the user must add their own test cases for the objects of each subclass to ensure that the code does not significantly change the functionality.
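
Since there is no automatic enforcement, a simple test like the one below is often how LSP violations are caught. This is a hedged sketch of the classic Rectangle/Square example (my own choice, not from the source):

```java
class Rectangle {
    protected int width, height;
    void setWidth(int w)  { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

// Classic LSP violation: Square changes the behavior callers expect from
// Rectangle, because its width and height can no longer vary independently.
class Square extends Rectangle {
    @Override void setWidth(int w)  { width = w; height = w; }
    @Override void setHeight(int h) { width = h; height = h; }
}

class LspDemo {
    // A "test case" that every Rectangle, including subclasses, should pass.
    static void checkResize(Rectangle r) {
        r.setWidth(2);
        r.setHeight(5);
        System.out.println(r.area() == 10 ? "behaves as expected"
                                          : "LSP broken: area is " + r.area());
    }

    public static void main(String[] args) {
        checkResize(new Rectangle()); // behaves as expected
        checkResize(new Square());    // LSP broken: area is 25
    }
}
```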

ISP focuses on designing interfaces that are specific to a client’s needs. Instead of creating one large interface that covers all methods, it is more beneficial to split the methods across smaller, more focused interfaces that are less coupled. Having too many methods in an interface can cause issues in the code, so it is better to separate the methods into individual interfaces that a class implements only when it actually needs them.
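
A minimal sketch of that splitting, using hypothetical Worker interfaces of my own rather than anything from the source:

```java
// A "fat" interface would force every implementer to provide both methods.
// ISP splits it into smaller, focused interfaces instead.
interface Workable { void work(); }
interface Feedable { void eat(); }

class HumanWorker implements Workable, Feedable {
    public void work() { System.out.println("Working"); }
    public void eat()  { System.out.println("Lunch break"); }
}

// A robot only implements the interface that actually applies to it.
class RobotWorker implements Workable {
    public void work() { System.out.println("Working around the clock"); }
}
```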

DIP states that high-level modules should not depend on low-level modules; both should depend on abstractions. This approach aims to reduce coupling between modules, increase modularity, and make the code easier to maintain, test, and extend. An important thing to note is that both high-level and low-level modules end up depending on abstractions. Following Dependency Inversion alongside the other SOLID principles leads to more refined and maintainable code.
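
A short hedged example of both layers depending on an abstraction (the MessageStore names are my own invention):

```java
// The abstraction that both layers depend on.
interface MessageStore {
    void save(String message);
}

// Low-level module: implements the abstraction.
class FileMessageStore implements MessageStore {
    public void save(String message) { /* write to disk */ }
}

// High-level module: depends on the abstraction, not on FileMessageStore,
// so the storage mechanism can be swapped (database, in-memory for tests, ...).
class MessageService {
    private final MessageStore store;

    MessageService(MessageStore store) { this.store = store; }

    void post(String message) { store.save(message); }
}
```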

From the blog CS@Worcester – Jason Lee Computer Science Blog by jlee3811 and used with permission of the author. All other rights reserved by the author.

Encapsulating What Varies

For this week’s blog post, I chose to discuss the design principle of encapsulating what varies, as described in the article “Encapsulate What Varies (EWV)- Design Principle” by Pankaj. This article discusses the rationale behind encapsulating what varies, its two aspects, its benefits, strategies for implementing it, and gives an example of what it looks like in practice. This article falls in line with the design principles section of the syllabus and its sub-section on encapsulating what varies. I particularly enjoyed the section discussing the benefits of the principle. In this blog post, I will review a few of the benefits of encapsulating what varies that are mentioned in the article.

The first benefit mentioned in the article that I will be discussing is flexibility. “Flexibility: By encapsulating the parts of the system that are subject to change, we make it easier to modify or replace those parts without affecting the rest of the system. This makes the system more flexible and easier to adapt to changing requirements.” As stated in the article, when you encapsulate what varies, the resulting project becomes significantly more straightforward to maintain and update. Instead of having to adjust one extensive method or class and possibly causing conflicts with other parts of the project, isolating the factors that are likely to change in the future prevents updates from causing problems in unrelated systems.

The next benefit of encapsulating what varies that I will be discussing is reusability. “Reusability: By creating abstractions that represent the parts of the system that are subject to change, we make it possible to reuse those abstractions in different contexts. This can save time and effort when developing new features or applications.” Being able to reuse aspects of your code in other areas is very useful for reducing the time spent on development and bug testing. Instead of making more methods or classes that could conflict with preexisting ones, you are reusing methods or classes that you already know will not conflict with other areas of your project.

Finally, I will discuss one of the greatest benefits that come with encapsulating what varies: maintainability. “Maintainability: By isolating the impact of changes to a specific part of the system, we make it easier to maintain the system over time. This reduces the risk of introducing unintended side effects or breaking existing functionality.” As I have also mentioned earlier in this blog post, isolating frequently changing parts of the project makes it much easier to diagnose bugs or other issues that may come up as the project is developed or as it is updated as time goes on.
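
To tie these benefits together, here is a small hedged Java sketch of my own (not taken from Pankaj’s article): the shipping-cost rule is the part that is likely to vary, so it is encapsulated behind an interface instead of being hard-coded inside the Order class.

```java
// The varying part, pulled out behind an abstraction.
interface ShippingCost {
    double costFor(double weightKg);
}

class FlatRateShipping implements ShippingCost {
    public double costFor(double weightKg) { return 5.00; }
}

class WeightBasedShipping implements ShippingCost {
    public double costFor(double weightKg) { return 2.00 + 1.50 * weightKg; }
}

// Order never changes when a new shipping rule is introduced.
class Order {
    private final ShippingCost shipping;
    private final double weightKg;

    Order(ShippingCost shipping, double weightKg) {
        this.shipping = shipping;
        this.weightKg = weightKg;
    }

    double shippingCost() { return shipping.costFor(weightKg); }
}
```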

Article: https://www.neatcode.org/encapsulate-what-varies/

From the blog CS@Worcester – P. McManus Worcester State CS Blog by patrickmcmanus1 and used with permission of the author. All other rights reserved by the author.

The Magic Behind Software Frameworks: Understanding Their Role & Importance (Week-7)

In the vast world of software development, there’s a term that’s often tossed around, especially when discussing the foundation of any application or website: Software Frameworks. For those on the periphery of tech, this term might sound intimidating or complex. However, software frameworks are essentially tools that simplify our digital lives, both as developers and users. Let’s delve into what they are, why they matter, and the magic they bring to the table.

What Are Software Frameworks?

At its core, a software framework is a platform or foundation that provides a structured way to build and deploy software. Think of it like a skeletal structure upon which developers can flesh out their applications. It provides a set of guidelines, tools, libraries, and best practices that developers can follow and utilize to streamline the development process.

Why Do Software Frameworks Matter?

  1. Efficiency & Speed: Instead of starting from scratch, developers can leverage pre-written chunks of code, leading to faster development cycles.
  2. Consistency: Frameworks provide standardized practices, ensuring that applications are built consistently and are maintainable in the long run.
  3. Scalability: As a business grows, its software needs might evolve. Frameworks often come with built-in tools and conventions that make scaling up (or down) more manageable.
  4. Security: Many popular frameworks undergo rigorous testing and have built-in security measures to help protect against common vulnerabilities.

Popular Software Frameworks:

  1. Web Development:
  • Django: A high-level Python web framework that encourages rapid development and a clean, pragmatic design.
  • React: A JavaScript library for building user interfaces, particularly single-page applications.
  2. Mobile Development:
  • Flutter: A UI toolkit for crafting natively compiled applications for mobile, web, and desktop from a single codebase.
  • React Native: Enables developers to build native mobile apps using JavaScript and React.
  3. Backend Development:
  • Express.js: A fast, unopinionated, minimalist web framework for Node.js.
  • Ruby on Rails: A server-side web application framework written in Ruby.

In Conclusion:

Software frameworks are the unsung heroes of the tech world, working behind the scenes to ensure that the applications and websites we rely on daily are robust, secure, and efficient. By providing a structured foundation, they enable developers to focus on what truly matters: creating innovative solutions and improving user experiences. Whether you’re an aspiring developer or just a tech enthusiast, understanding the role of software frameworks can offer a deeper appreciation for the digital tools that power our connected world.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

CS343 Blog Post for Week of October 22

This week, I wanted to continue from my previous entry describing the Single Responsibility Principle. The next concept named in the SOLID design philosophies is the Open-Closed Principle. I first heard the name of this design philosophy in one of my computer science courses. I couldn’t figure out what exactly it meant just going intuitively from the name, however, so I returned to the Stackify website from my previous blog post to continue reading about the SOLID principles.

The Open-Closed Principle was first coined by Bertrand Meyer in his 1988 book “Object-Oriented Software Construction”. Robert C. Martin would later adopt the Open-Closed Principle, calling it “the most important principle of object-oriented design”. He would explain the Open-Closed Principle as “Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.” In practice, this means programming such that new functionalities may be added without changing the existing code.

Bertrand Meyer originally proposed inheritance as a means of abiding by the Open-Closed design principle, but this introduces the problem of tight coupling between software classes. If one class is too dependent on a parent class to function, it will be difficult to make changes to one class without modifying the other class to maintain functionality. Instead, Robert C. Martin suggests using interfaces rather than concrete superclasses, redefining this design philosophy as the Polymorphic Open/Closed principle. The abstraction provided by interfaces helps avoid the problem of tight coupling between software classes.

The article goes on to provide a practical example of the application of the Open/Closed principle through creating a software class representing a coffee machine, and an accompanying class representing a simple app that controls the coffee machine. The article presents the situation where the user may have a different kind of coffee machine that they would like to control with the same software app. The process of refactoring the BasicCoffeeMachine code to implement a broader CoffeeMachine interface is detailed, as well as the process of refactoring the CoffeeApp to utilize the new CoffeeMachine interface. This way, the functionality of our software was greatly expanded, without having to remove large portions of our previously written code.
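
The article’s actual code isn’t reproduced here, but a rough sketch of the refactoring it describes might look like the following. Only the names CoffeeMachine, BasicCoffeeMachine, and CoffeeApp come from the article; the brewCoffee() method and the EspressoMachine class are my own guesses.

```java
// The abstraction the app is programmed against.
interface CoffeeMachine {
    void brewCoffee();
}

// The original machine now implements the interface.
class BasicCoffeeMachine implements CoffeeMachine {
    public void brewCoffee() { System.out.println("Brewing filter coffee"); }
}

// A different kind of machine can be added without modifying existing code.
class EspressoMachine implements CoffeeMachine {
    public void brewCoffee() { System.out.println("Pulling an espresso shot"); }
}

// The app depends only on the CoffeeMachine interface.
class CoffeeApp {
    private final CoffeeMachine machine;

    CoffeeApp(CoffeeMachine machine) { this.machine = machine; }

    void prepareCoffee() { machine.brewCoffee(); }
}
```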

I chose to research more about the Open/Closed Principle this week because I don’t find myself using interfaces in my projects as often as I should. I could save space and make my code more efficient if I took care to design my software classes to an interface. The example in the article seems to be employing the Strategy pattern when instantiating the CoffeeMachine objects. Understanding both the Strategy pattern and the Open/Closed Principle will help me make better use of interfaces when designing software classes in the future.

From the blog CS@Worcester – Michael's Programming Blog by mikesprogrammingblog and used with permission of the author. All other rights reserved by the author.

POST #1

Hi, my name is Abdullah Farouk and this is going to be my first blog post for the CS343 class. We are learning about REST APIs, and since I was out of class this week, I thought it would be a great idea to read and learn more about them on my own. I am using a post from freeCodeCamp that I found online as a reference; I will post the link down below for you all to give a read.

For those of you who don’t know much about REST APIs, I will explain most of what you need to know in this post. REST stands for representational state transfer, and API stands for application programming interface, which basically means a connection between programs so they can transfer data. REST is a software architecture that sets constraints and conditions on how the API is supposed to be used. This lets us interact with the data that is stored on web servers. Companies love to use REST APIs for a lot of reasons, like their effectiveness and how they make client-server interactions better.

A REST API makes it easy for us to communicate with servers by giving us HTTP request methods to use, including GET, POST, PATCH, and DELETE, which we saw in the classwork assignments that the professor wrote. GET lets us retrieve and read data. POST is used to create data, like creating a new client. PATCH lets us update the data that is on the server, while DELETE obviously deletes the data. The post that I read, which I will link below, gives an example of how to actually use these methods. It is really helpful and I suggest everyone give it a read.
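
Just to illustrate the idea (this is my own rough sketch, not the example from the freeCodeCamp article, and the URL is made up), here is how a GET and a POST request might look using Java’s built-in HttpClient:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestClientDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET: read data about an existing client (hypothetical endpoint).
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/clients/42"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());

        // POST: create a new client record; PATCH and DELETE are built the same way.
        HttpRequest post = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/clients"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Ada\"}"))
                .build();
        client.send(post, HttpResponse.BodyHandlers.ofString());
    }
}
```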

reference article: https://www.freecodecamp.org/news/what-is-rest-rest-api-definition-for-beginners/

From the blog CS@Worcester – Farouk's blog by afarouk1 and used with permission of the author. All other rights reserved by the author.

Week 7: CS-343

Object Oriented Programming Principles

In object-oriented programming there are four basic principles: Encapsulation, Abstraction, Inheritance, and Polymorphism. These principles are so fundamental that they are referred to as the four pillars of object-oriented programming.

Encapsulation

Encapsulation is hiding or protecting the data inside a class. This is achieved by restricting direct access to that data and exposing it only through public methods. Variables within a class are kept private, while accessor methods are kept public in order to access the private variables.

Encapsulating data helps prevent unauthorized modification by only allowing access to the data through the defined accessor methods. For example, when adding variables to a class, rather than letting other code access or modify them directly, one would create “getter” and “setter” methods that still provide access to the data. These methods give users the same functionality, but without the risk of undesired changes.
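
Here is a small sketch of what that looks like in Java (the Account class is my own example):

```java
public class Account {
    // The field is private, so outside code cannot modify it directly.
    private double balance;

    // Public "getter" exposes the value for reading.
    public double getBalance() {
        return balance;
    }

    // Public "setter" controls how the value is allowed to change.
    public void setBalance(double balance) {
        if (balance >= 0) {          // reject undesired changes
            this.balance = balance;
        }
    }
}
```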

Abstraction

Abstraction is showing only relevant data of classes. Abstraction enables working with high level mechanisms of a class rather than the specific details of implementation, thus reducing complexity.

Picture a system storing different types of vehicles. Rather than creating different concrete classes for each type of vehicle, abstraction can be applied to create one class, ‘Vehicle’, that has the frameworks of basic behaviors and attributes that all vehicles have. These could include methods and attributes such as ‘start()’ and ‘stop()’, and ‘make’ and ‘model’. Then classes for each type of vehicle can be made to extend the ‘Vehicle’ class. Classes that extend ‘Vehicle’ can add specific implementations to the methods and attributes, depending on the vehicle.
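
A hedged sketch of that Vehicle idea in Java (the Motorcycle subclass and its messages are my own choices):

```java
// High-level abstraction: callers work with Vehicle without knowing the details.
abstract class Vehicle {
    protected String make;
    protected String model;

    Vehicle(String make, String model) {
        this.make = make;
        this.model = model;
    }

    abstract void start();
    abstract void stop();
}

// A specific type of vehicle supplies its own implementation.
class Motorcycle extends Vehicle {
    Motorcycle(String make, String model) { super(make, model); }

    void start() { System.out.println("Kick-starting " + make + " " + model); }
    void stop()  { System.out.println("Engine off"); }
}
```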

Inheritance

Inheritance can be defined as an “is-a” relationship between a parent class and its child classes. The child class derives all methods and attributes from the parent class, enabling reuse of code, while also allowing the addition of unique attributes and methods.

Imagine a system for a college representing faculty and students. A parent class, ‘Person’, is created for common data among all people at the college such as ‘name’ and ’email’. Child classes of ‘Person’ can be created such as ‘Faculty’ and ‘Student’. Both child classes would inherit ‘name’ and ’email’ from the ‘Person’ class, while unique information can be added to each of the child classes. Unique attributes could include ‘gpa’ for ‘Student’ and ‘salary’ for ‘Faculty’.
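
In Java, that college example might look roughly like this (the constructors and field types are my own assumptions):

```java
class Person {
    protected String name;
    protected String email;

    Person(String name, String email) {
        this.name = name;
        this.email = email;
    }
}

// Child classes inherit name and email and add their own unique data.
class Student extends Person {
    private double gpa;

    Student(String name, String email, double gpa) {
        super(name, email);
        this.gpa = gpa;
    }
}

class Faculty extends Person {
    private double salary;

    Faculty(String name, String email, double salary) {
        super(name, email);
        this.salary = salary;
    }
}
```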

Polymorphism

Polymorphism can be simply put as reusing code with different types of objects, reducing code redundancy.

Using an interface called ‘Shape’ with method ‘calculateArea()’, different types of shapes can implement ‘calculateArea()’ and change how the specific shape uses the method. For example a square would calculate the area differently than a circle. However, both can still use ‘calculateArea()’ due to polymorphism.
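
A minimal sketch of that Shape example in Java (the specific shapes and formulas are my own choices):

```java
interface Shape {
    double calculateArea();
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double calculateArea() { return side * side; }
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double calculateArea() { return Math.PI * radius * radius; }
}

class AreaDemo {
    public static void main(String[] args) {
        // The same call works on different shapes; each supplies its own area logic.
        for (Shape s : new Shape[]{ new Square(2), new Circle(1) }) {
            System.out.println(s.calculateArea());
        }
    }
}
```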

Conclusion

As we learned earlier in the semester, many of us did not have a complete understanding of the four principles above, which is why I chose to learn more about them. After reading the blog, I now better understand the differences between them and why each of the principles is important. I will be implementing these principles in all future projects.

Resources:

View at Medium.com

From the blog CS@Worcester – Zack's CS Blog by ztram1 and used with permission of the author. All other rights reserved by the author.

YAGNI

YAGNI is an acronym for You Ain’t Gonna Need It. It’s a principle from Extreme Programming that says programmers should only add functionality once it is definitely necessary. When coding, even if you are sure that you will need a piece of code or a feature later on, you don’t need to implement it now; by the time you get there, you might need something else entirely. This is why you don’t want developers to waste their time creating extra elements that might not end up being necessary and that can slow down the process. YAGNI helps save time by avoiding work on features that might not be used, the main features of the program get developed better, and less time is spent on each release. When you are trying to solve a problem you don’t fully understand yet, you won’t be capable of making the best choices when coming up with a solution. On the other hand, when you know what is causing the problem, you can come up with a better plan to solve it. In software development, it is tempting to build a system that can deal with everything, only to end up using a few of its features while the rest demands attention and upgrades.

YAGNI can be applied by development teams from small to large, so it isn’t limited to only small projects or large enterprises. This principle can help set up a task list of do’s and don’ts. Always try to implement the selling feature first and get the app ready for end users. After the app is functional, you can start adding extra features in the next version. Waiting to add additional features will save a lot of time and effort for developers and help them meet project deadlines. Once your app is live, you should keep up with its updates and keep making the app better; delaying updates in order to cram in more features can give competitors a chance to take your users. The first version of the app doesn’t need to be perfect: if it can do the simple things and still fulfill its intended purpose, then that is enough. With time you can add the extras you need later on instead of squeezing everything into one version. The You Ain’t Gonna Need It principle is very time effective and efficient for developers, helping us get our projects done on time, avoid adding anything that isn’t necessary at the moment, and keep stress down because nothing has to be built before it is actually needed. This principle is time, stress, and cost efficient for developers, which is why it should be used consistently.

https://www.techtarget.com/whatis/definition/You-arent-gonna-need-it

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.

Navigating Software Complexity: The Principle of Least Knowledge (LoD)

In the labyrinthine world of software design, where complexity can quickly become a daunting challenge, a guiding principle emerges as a shining beacon of simplicity and maintainability. This principle, known as the Principle of Least Knowledge or the Law of Demeter (LoD), serves as a valuable compass for software architects and developers. Its primary mission: to minimize dependencies between objects and foster a more modular and maintainable codebase.

At its essence, the Principle of Least Knowledge encourages developers to reduce the interactions between different parts of a software system. This reduction of dependencies simplifies the code and makes it more robust. The LoD suggests that an object should only have limited knowledge of other objects, specifically, objects that are directly related to it. In simpler terms, it advocates for keeping the communication between objects focused and minimal.

By adhering to the Principle of Least Knowledge, developers can significantly enhance the maintainability of a software system. When objects have limited knowledge about other objects, changes in one part of the system have minimal impact on other parts. This isolation of knowledge not only reduces the risk of unintended consequences but also facilitates easier testing and debugging.

As software systems grow in complexity, the LoD remains a steadfast ally. It ensures that each component knows only what is essential for its immediate tasks, fostering a more modular and less error-prone codebase. With the Principle of Least Knowledge as their guiding star, software developers continue to navigate the intricacies of software design, simplifying the path to robust and maintainable systems.

From the blog CS-343 – Hieu Tran Blog by Trung Hiếu and used with permission of the author. All other rights reserved by the author.

Semantic Versioning

Have you ever gone to open an application on your computer and been given a prompt saying your software is out of date?  Some of these messages even tell you what version you are running compared to the latest available.

For example, when I opened Notepad++ I received one of these prompts. It showed that I currently have version 8.4.7.0 installed, and that the available version is 8.5.7.

This is an example of Semantic Versioning.  The article “A Guide to Semantic Versioning” by Suemayah Eldursi (https://www.baeldung.com/cs/semantic-versioning)  clearly breaks down each segment of a Semantic Version. Semantic Versioning is a scheme for labeling versions of software using meaningful numbers to represent what was changed.

The first digit refers to the major updates.

  • Updates that involve changes that break API functionality and are not backwards compatible.  

In my example, you can see the current major update matches the available.

The second digit indicates minor updates.  

  • Updates include new features that are backwards compatible and will not break anything in the API.  

In my example, you can see the current version is one minor update behind the available version.

The third digit describes the number of patches. 

  • Small bug fixes and changes that don’t add any new features, are backwards compatible, and don’t cause any breaks in the API.  

In my example, the current patch number is less important because of the pending minor update; however, we can see that there have been 7 patches since that minor update.

Semantic versions may also include a pre-release label and build number.  Pre-release labels, such as alpha and beta, make a version look something like “1.0.0-alpha.4”.  These labels let the user know that it is a pre-release build, and build numbers are mostly useful for developers to identify additional version information as needed.

Being able to read a version number is important for users so that they can know what changes have been made to the software or if it’s backwards compatible.  Therefore, updating the version number correctly is just as important.  The article “Introduction to Semantic Versioning“ by Parikshit Hooda (https://www.geeksforgeeks.org/introduction-semantic-versioning/) provides a great illustration that demonstrates what type of update you should choose and what the version number would look like after:
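
The original illustration isn’t reproduced here, but assuming a current version of 2.7.8 (the post only states that the patch digit goes from 8 to 9), the choices would look roughly like this:

```
Current version: 2.7.8
Bug fix (patch):                       2.7.8 -> 2.7.9
Backwards-compatible feature (minor):  2.7.8 -> 2.8.0   (patch resets to 0)
Breaking change (major):               2.7.8 -> 3.0.0   (minor and patch reset to 0)
```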

In this example, you can see how a bug fix would only change the last digit from an 8 to 9.  It’s important to note that a minor or major update resets all following numbers to 0.

Maintaining proper documentation of your updates through Semantic Versioning is vital for both the user and the developer. As a developer, keeping an organized record of each update and clearly communicating what changed helps users understand exactly what they are getting with every new version.

From the blog CS@Worcester – CS Learning by kbourassa18 and used with permission of the author. All other rights reserved by the author.