Category Archives: CS-343

What is the Abstract Factory Design Pattern?

Learning new design patterns is interesting because, for the most part, it exposes me to ways of coding I thought I would never have to use. This week I’ve been refactoring design patterns in my code, such as the Singleton pattern. The Singleton pattern restricts the instantiation of a class to a single object. Basically, once that object is created we never recreate it, because recreating it would wipe out the information we want that object to hold. That is where the Singleton pattern shows up! But for now, I want to go over the Abstract Factory design pattern. The Abstract Factory pattern is similar to the Factory pattern, but according to the blog “Abstract Factory Design Pattern in Java” published by Pankaj, the Abstract Factory is like a factory of factories.
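To show what I mean, here is a minimal Singleton sketch in Java. The class name GameSettings is just something I made up for illustration, not code from Pankaj’s blog:

```java
public class GameSettings {
    // The single shared instance, created lazily the first time it is asked for.
    private static GameSettings instance;

    // A private constructor stops anyone else from calling new GameSettings().
    private GameSettings() { }

    // Every caller gets the same object back, so its state is never wiped out.
    public static synchronized GameSettings getInstance() {
        if (instance == null) {
            instance = new GameSettings();
        }
        return instance;
    }
}
```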

If you’re familiar with the Factory design pattern, it uses a single factory class that returns different subclasses based on the input provided, and that factory class uses if-else or switch statements to figure out which class it’s supposed to return. In the Abstract Factory pattern, those if-else and switch statements are thrown out the window; instead, we have a factory class for each subclass, plus an abstract factory class that returns the right subclass based on the factory class it’s given. So the Abstract Factory works with multiple concrete factory classes, each of which implements a common superclass or interface, which is the abstract factory.
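Here’s a rough sketch of that idea in Java. The button and GUI names are my own made-up example, not code from Pankaj’s post, but the structure is the same: one concrete factory per family, all behind a single abstract factory type.

```java
// The products the factories create.
interface Button {
    void render();
}

class WindowsButton implements Button {
    public void render() { System.out.println("Windows-style button"); }
}

class MacButton implements Button {
    public void render() { System.out.println("Mac-style button"); }
}

// The abstract factory type that client code depends on.
interface GuiFactory {
    Button createButton();
}

// One factory class per family, so no if-else or switch is needed.
class WindowsFactory implements GuiFactory {
    public Button createButton() { return new WindowsButton(); }
}

class MacFactory implements GuiFactory {
    public Button createButton() { return new MacButton(); }
}

public class Demo {
    // The client only ever sees GuiFactory; swapping families means
    // passing in a different factory, not editing a conditional.
    static void renderUi(GuiFactory factory) {
        factory.createButton().render();
    }

    public static void main(String[] args) {
        renderUi(new WindowsFactory());
        renderUi(new MacFactory());
    }
}
```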

The Abstract Factory takes an interface-and-extension approach rather than relying on a single implementation. In the Factory Method approach, by contrast, all the subclasses are essentially funneled through one factory class, and that one class is what you call whenever a subclass needs to be reached. With the Abstract Factory, we instead have separate factory classes representing those subclasses that can be changed or coded to our liking. It’s a lot more robust because we aren’t hammered down by conditional statements, but since we lose that simplicity, things do tend to get more complicated: there are many more classes involved in building a factory, and each of those sub-factory classes, as you could call them, needs its own specifics worked out.

It would be really interesting to implement this in one of my projects. Right now I’m making modifications to a game called Minecraft (you might have heard of it; it’s a pretty popular game). It would be interesting to create an Abstract Factory design for the many tools and blocks that I’ll be adding to the game. It might seem complicated, but it could help me organize the mod a lot better.

Link to Blog “Abstract Factory Design Pattern in Java” by Pankaj:

https://www.digitalocean.com/community/tutorials/abstract-factory-design-pattern-in-java

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.

You ain’t going to need it? (YAGNI)

YAGNI! You ain’t going to need it!

Now, someone might wonder why that stuck out to me. Well, it’s because I often have the urge to ‘future proof’ things! And this is in direct contrast with YAGNI. Not only do I try to future proof things in coding, but I tend to do it in a variety of different fields.

Building a new computer? Let’s shell out a few extra dollars to make sure I can upgrade my storage in the future.
Making a new base in Minecraft? I’ll clear out this space so I have room to expand!
Buying groceries? Let’s just pick up this ingredient in case I might need it!

When has that helped me? Well, actually pretty often! But… In the grand scheme of things, the ratio is probably quite small. If I had to put a number to it, I’d say… It’s probably…

1/10

Potentially even less! So why the heck am I still stuck on this idea of future proofing? No idea! But anyway, to get back on topic, YAGNI seemed useful not just in my coding, but everywhere, so I started to look more into it! In my search for a grander understanding, I discovered this post by Martin Fowler. And boy, oh boy, did Martin teach me a lot of things!

Firstly, it makes sense! The concept of future proofing, or preparing for something that you will eventually need, seems sound, right? But things change! Especially when it comes to specifications and needs. Things might seem set in stone one day, but the next, something might have changed and the requirements become different! Any and all investment and work you put into that ‘feature’ will be wasted, so don’t think about it until you need it!

In my own cases, I can definitely say that if I had just taken a more YAGNI approach, I’d have saved myself countless hours and at least a small pile of money! Plans are plans for a reason! Just because I think I’ll want it later doesn’t mean I’ll actually want it when the time comes!

That computer I future proofed? Never upgraded it!
That space I cleared out for an expansion in Minecraft? Never expanded it!
That one ingredient I picked up? Never used it!

These have just been my experiences, but I am certain that at least some people can understand where I’m coming from. It really does seem to make sense to prepare for the future, right? But if you’re really preparing for the future, you had better make sure you absolutely are going to use it! That is certainly a lot easier for real-life things, but when it comes to coding and meeting specifications, it’s just so much harder! You probably have about the same chance of reading your own future in a crystal ball as you do of correctly guessing that a feature will be needed! So…

Don’t do it! You ain’t going to need it!

From the blog CS@Worcester – Bored Coding by iisbor and used with permission of the author. All other rights reserved by the author.

Getting better and better

As we have been progressing through the weeks, working on the POGILs as well as the homework, I find myself slowly getting the hang of the workflow and of using VSCode. For once I understand what is going on in the classes and am eager to continue learning, because it is finally being pieced together in my head. The biggest part would probably have to be the feedback that I receive, which most everyone else probably gets as well: it’s not some random jargon about how something is wrong, but more like a guiding hand toward the solution that you aren’t too far from. The biggest takeaway I have gotten is that organizing the UML diagrams in the homework really connected the dots on how these classes and models work together, better than anything has before; I can understand the paths a little better than I did before.

I only hope I continue to understand the work I keep doing, and I’m eager to see what work I can get done.

From the blog cs@worcester – Marels Blog by mbeqo and used with permission of the author. All other rights reserved by the author.

YAGNI!

While looking through my blogs, I came across a familiar acronym that I use all the time when it comes to developing software and systems. The acronym is “YAGNI”, which stands for “You Ain’t Gonna Need It.” According to the blog “Automation Principles – YAGNI/Premature Optimization,” it’s a principle of extreme programming that states a programmer should not add functionality until it is deemed necessary. The blog talks about how many engineers will spend hours trying to build the “right system” the first time. In some cases, trying to build a flawless system on the first go can be rather difficult to achieve. The problem is that programmers spend too much time worrying about efficiency in the wrong places, and that premature optimization can cause more harm than good.

The blog goes over Big-O notation, which does not care about constants but about the long-term growth rate of functions. This is a good rule to keep in mind, because introducing an optimization before a fraction of the code is even written can make a program a lot more difficult to support: as explained in the blog, it increases design considerations, the likelihood of race conditions, and the difficulty of troubleshooting. Optimizing certain processes might not lead to any time savings or real optimization. In fact, it could do the exact opposite; a good example the blog gives is constructing lambdas and list comprehensions in Python where simple for loops would do.

The blogger mentions that in his personal experience he would add non-functional requirements, such as authentication and logging, too early, adding features before they were needed. With that said, I remember spending so much time adding the ability to connect my bank to my finance application that I didn’t have time to code the application itself.

The blogger also talks about network automation, explaining that networking is all about speed, so following YAGNI to the letter isn’t always in the cards. He goes into real-world examples from the network automation process, such as issues with multithreading: overloading the TACACS server with too many requests at once is very problematic, and scaling wide too fast can slow processes down and use too many resources, which overall is very inefficient. Configuration generation can also take too long and be inefficient. With all this in mind, the blogger isn’t saying we should ignore tomorrow’s problems, but is more in line with building things up as we go.

“Automation Principles – YAGNI / Premature Optimizations” :

https://blog.networktocode.com/post/Principle-YAGNI/

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.

Semantic Versioning (There’s a standard?!?!)

Being the person that I am, I made up my own rules without being influenced by anyone else. For me, I simply thought:

Well… It’s my code, right? That means I can name things however I want, right? So if I said this is version 0.9, then it’s version 0.9! Or if this is version 2.0, then it’s 2.0!

In response, my friend informed me of a few soft rules; he was more familiar with versioning than I was through an internship opportunity, and he shed some light on a few details, like how a version is not just a made-up number that you assign to your project/code. Hearing this, I was honestly pretty confused, but I looked into it and discovered Semantic Versioning!

And wow, did that blow me away. Looking back at the way I approached things, I was no different from a headless chicken running around! Learning about Semantic Versioning through this video, I found it both so straightforward and so useful. No longer will my versions be numbered willy-nilly, nor will I forget my own versions and what happened in each update! It’ll finally make sense!

To break down Semantic Versioning, consider versions as three parts!

The Major version, so think 1.0.0
The Minor version, such as 0.1.0
And finally, patches, like so 0.0.1

Major versions are, well, major! These are versions that leap forward and have large changes! Making a major version usually implies that the code is changed in such a way that it will no longer work with older versions. These changes might be large architectural changes, or other large scale changes that usually break the existing structures or APIs.

Minor versions are NOT as massive (duh), but they come with plenty more than, say, patches! Minor versions could be added features or updates to existing features, and most of the time, nothing should break. Functionality that existed before should still work, and further development of new features shouldn’t impact or break anything either (ideally).

Patches are like bandages! These are where the bug fixes are! If something is broken in an unintended way, that’s when patches come in, and they should be used to iron out problems that were not accounted for. It could be something minor like the output of some code is a 2 instead of a 1, or it could be a bit more major like a whole portion of the code crashes when run, but patches are for fixes!
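To make the three parts concrete, here’s a tiny sketch in Java of how I picture versions being bumped. The SemVer record and its method names are my own invention for illustration, not part of any standard library.

```java
// A minimal, hypothetical semantic version holder (requires Java 16+ for records).
public record SemVer(int major, int minor, int patch) {

    // Breaking change: bump major, reset minor and patch.
    public SemVer bumpMajor() { return new SemVer(major + 1, 0, 0); }

    // Backwards-compatible feature: bump minor, reset patch.
    public SemVer bumpMinor() { return new SemVer(major, minor + 1, 0); }

    // Bug fix only: bump patch.
    public SemVer bumpPatch() { return new SemVer(major, minor, patch + 1); }

    @Override
    public String toString() { return major + "." + minor + "." + patch; }

    public static void main(String[] args) {
        SemVer v = new SemVer(1, 0, 0);
        System.out.println(v.bumpPatch()); // 1.0.1 after a bug fix
        System.out.println(v.bumpMinor()); // 1.1.0 after a new feature
        System.out.println(v.bumpMajor()); // 2.0.0 after a breaking change
    }
}
```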

As an example, I’ll run through a brief bit of versioning on a made up project! Let’s say… I’m making a program to print out a leaderboard. It connects to an online library to gather scores, and then displays it to whoever connects to it.

After long and grueling work, I did it! I finally completed it and so, I release…
Version 1.0.0
It’s done and completed, but then… It turns out, my leaderboard only prints out the WORST scores! Dang it, I flipped the list around by mistake! So off I go, fixing this bug. After everything is working again, my next version should be 1.0.1.
But then… I think to myself, wouldn’t it be cool if I added region support too? Then we could see where this score is from, and potentially even group regions by high scores!
So, after adding this new feature, my next version will be 1.1.0!
Pretty straightforward, right? It’s really clear what is going on too!
Let’s say another bug is introduced, bam, 1.1.1!
Then more features, 1.2.0
And finally another bug is squashed! 1.2.1
And… another bug ! 1.2.2
But… then wait, there’s another bug. 1.2.3
Things feel good and everything works, but then I learn about a whole new library that does what I want it to do and gives me more room for growth. So I rework the whole program with this new library! The old functionality is still there, but since I’m switching to a whole new system, this version will no longer work with the old one, and this is when I jump a major version.
We’re now on version 2.0.0!

I hope that example made sense! There is more to semantic versioning as well, but that is the quick breakdown. Introducing things like alpha/beta releases and also ^ and ~ adds some other intricacies, but for now… Let’s not think about them! Alphas/betas are optional for testing purposes, and to be honest, ^ and ~ are a bit out of my grasp. I don’t fully understand them yet! I’ll look into them more, and maybe I’ll have a future update post detailing what I’ve discovered!

From the blog CS@Worcester – Bored Coding by iisbor and used with permission of the author. All other rights reserved by the author.

UML Diagrams Are Amazing!

These past few weeks, I’ve been getting myself refamiliarized with UML diagrams. These diagrams have made frequent appearances in my CS career. From my current understanding, they are a great way of analyzing one’s code from the top down. At first, I thought it was just another hassle: some UML diagrams can get rather difficult to understand, and with all the terminology and the different ways to draw these charts, it can get pretty hectic to work out how the code works. I learned to take in the information one piece at a time. In the blog “Types of UML Diagrams” by the Lucid Content Team, it’s explained that UML diagrams are essential to any formal code training, but they take some time to build and can become out of date fairly quickly in an Agile environment. They are still very useful for quick visual documentation, so that employees can give stakeholders a quick overview of the system and developers don’t waste time in meetings.

UML stands for Unified Modeling Language, a way to visually represent the architecture, design, and implementation of complex software systems. It is meant to keep track of the relationships and hierarchies within a software system. It’s hard enough to keep track of thousands of lines of code, so the UML diagram is supposed to keep track of all these components of the software. UML diagrams can be used with basically any programming language, so all software developers should be able to understand them. UML diagrams keep things productive and focused, and they are very helpful to engineering teams. This can include bringing new team members or developers up to speed, navigating source code, and planning out new features before programming them, and it makes communicating with a non-technical audience easier, which means that most people will be able to understand the process regardless of programming experience.

There are many types of UML diagrams. The first group is structural UML diagrams, which show how the system is structured, with classes, objects, packages, and the relationships between them. The component diagram is a more specialized version of the class diagram, which breaks a complex system down into smaller components and visualizes the relationships between those components. Deployment diagrams show how software is deployed on hardware components in a system. Composite structure diagrams are essentially blueprints for the internal structure of a classifier. Object diagrams show examples of data structures at a specific point in time. And package diagrams are used to show dependencies between different packages. Obviously, this doesn’t even cover half of the UML diagram spectrum, since we didn’t even get to behavioral UML diagrams, which are used to visualize how the system behaves and interacts with itself and other systems.

I’ve come to realize that UML diagrams can be very useful. It’s important to read code from the source, but that can be rather time-consuming; a UML diagram is a lot easier to take in and can explain how the software works in just minutes. In my future projects, I want to utilize UML diagrams so that I can better explain my own work to others. I feel it would have been much easier to explain my past projects to people if I had had one. The blog was quite interesting because it explained the many types of UML diagrams that exist and their practical uses.

Link to “Types Of UML Diagrams” by Lucid Content Team: https://www.lucidchart.com/blog/types-of-UML-diagrams

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.

Getting Solid with SOLID

This week I read a blog post titled “SOLID Principles every Developer Should Know” by Chidume Nnamdi. Chidume taught me the important things I should know about SOLID and gave good examples to help make it more clear. SOLID stands for Single Responsibility Principle, Open-Closed Principle, Liskov Substitution Principle, Interface Segregation Principle, and Dependency Inversion Principle.

The Single Responsibility Principle says a class should only have one job. This is because if a change is made, it can affect more than just the one class that was meant to be altered. The Open-Closed Principle says software entities such as classes, modules, and functions should be open for extension but closed for modification. This makes sense especially when dealing with larger programs: if something new has to be added, it makes more sense to extend a function than to edit it, since a function that keeps getting edited can fill up with if statements and get messy. The Liskov Substitution Principle is that a subclass must be substitutable for its superclass; basically, if a subclass is used instead of its parent class, the code should still work properly. The Interface Segregation Principle is “make fine-grained interfaces that are client specific.” This is done so that clients are not handed interfaces containing parts they do not use. Finally, there is the Dependency Inversion Principle: dependency should be on abstractions, not concretions. This basically means abstractions should not depend on details; details should depend upon abstractions.
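To make one of these concrete, here is a small Open-Closed sketch in Java. The shape classes are my own example, not code from Chidume’s post: new shapes get added by extension instead of by editing a growing if-else block.

```java
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Rectangle implements Shape {
    private final double width;
    private final double height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    public double area() { return width * height; }
}

public class AreaCalculator {
    // This method never changes when a new Shape class is introduced.
    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area();
        }
        return total;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        System.out.println(totalArea(shapes));
    }
}
```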

I selected this blog because it seemed easy enough to understand and had code examples to demonstrate the principles in action. There was a lot of information to gather from this one blog post, but it was well worth the read. Chidume does a good job explaining each principle, and the pieces of code used throughout the post helped clarify any confusion. I think following these principles will be a big help in the future when coding, especially for larger programs that could easily become messy if I did not follow the SOLID rules. I think it will take some time and practice to be able to implement these rules in all my programs, but it will be worth doing in the long run. I think that focusing on one rule at a time will be the easiest way to master these principles, since it can look like a lot at once if all of them are needed in a single program.

Link: https://blog.bitsrc.io/solid-principles-every-developer-should-know-b3bfa96bb688

From the blog CS@Worcester – Ryan Klenk's Blog by Ryan Klenk and used with permission of the author. All other rights reserved by the author.

Week 4 – SOLID Principles

For this week, I decided to look at FreeCodeCamp’s article on SOLID principles. This article goes in depth on each of SOLID’s principles, which are the Single Responsibility Principle (every class should have one job), the Open-Closed Principle (open to extension, closed to modification), the Liskov Substitution Principle (subclasses should be substitutable for their base classes), the Interface Segregation Principle (many specific interfaces are better than one general interface), and the Dependency Inversion Principle (classes should depend on interfaces or abstract classes rather than concrete classes and functions). The article describes each principle and gives examples of each in action, which is why I chose it.
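As a quick taste of the last one, here is a minimal Dependency Inversion sketch in Java. The class names are my own invented example rather than code from the FreeCodeCamp article: the high-level Notifier depends on an interface, never on a concrete sender.

```java
interface MessageSender {
    void send(String message);
}

class EmailSender implements MessageSender {
    public void send(String message) { System.out.println("Emailing: " + message); }
}

class SmsSender implements MessageSender {
    public void send(String message) { System.out.println("Texting: " + message); }
}

public class Notifier {
    private final MessageSender sender;

    // Any MessageSender can be plugged in; Notifier never names a concrete class.
    Notifier(MessageSender sender) { this.sender = sender; }

    void notifyUser(String message) { sender.send(message); }

    public static void main(String[] args) {
        new Notifier(new EmailSender()).notifyUser("Build passed");
        new Notifier(new SmsSender()).notifyUser("Build passed");
    }
}
```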

I found it really helpful to see physical code, how the code violated the principle, and how to fix the code to make it fall in line with the principles. I also found it very helpful when the article explained the common mistakes with each principle and how to avoid them. The article overall made it very easy to understand each of the principles and described them in a casual way. FreeCodeCamp is a non-profit organization aimed at helping beginning coders and developers understand coding concepts.

In the future I will take note when designing my code to ensure that it falls in line with the SOLID principles, to avoid my code becoming too complex and opaque. This will allow me and anyone reading my code to understand it and, if necessary, extend it rather than modify it, which is exactly what the Open-Closed Principle aims for: code that is open to extension but closed to modification. Looking back at code I have written in previous years, it does not follow these principles at all.

Link: https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/

From the blog CS@Worcester – Noelan Chabot's Blog by nchabot1 and used with permission of the author. All other rights reserved by the author.

The Importance of Concurrency

Computers have come a long way. Modern machines have several CPU cores, or even several CPUs, and we utilize those cores to build high-volume applications. This week I read a blog discussing concurrency in programming. The blog, “Concurrent Programming – Introduction” by Gowthamy Vaseekaran, defines concurrency as the ability to run several programs, or several parts of a program, in parallel. Vaseekaran goes further by saying that programs that take a long time to perform certain tasks can benefit from concurrency, since those tasks can be done in parallel or asynchronously, which will for the most part improve the performance of the program. Vaseekaran also notes that computers didn’t have operating systems back in the day, so a single program was executed from start to end and had access to all the resources of the machine. Nowadays, executing a single program at a time is seen as an inefficient and wasteful use of expensive computer resources.

Several factors led to the development of operating systems that allow multiple programs to run at once. One is resource utilization: the author explains that programs sometimes have to wait for external operations, so using that time to let another program run is far more efficient. Fairness allows multiple users and programs to have equal claims on a machine’s resources; it is fairer to let them share the computer than to have one program run from start to end before another can begin. Convenience matters too, so that several programs can coordinate to perform a single task. It’s interesting reading this while I’m taking a class on algorithms, where we discuss how some programs take longer than others: there are worse and better solutions for how a problem, or a program in this case, should be run, and concurrency could fit into making programs run more efficiently.

The author then goes on to discuss threads, which are a facility for allowing multiple activities within a single process: a series of executed statements, a nested sequence of method calls, and so on. We use threads to perform background or asynchronous processing. Threads take advantage of multiprocessor systems and simplify program logic when there are multiple independent entities. Java uses threads very often; every Java program creates at least one thread.
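For instance, starting an extra thread in Java can be as simple as this little sketch (my own toy example, not code from Vaseekaran’s blog):

```java
public class ThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // The main thread exists automatically; here we start a second one.
        Thread worker = new Thread(() -> System.out.println("Hello from a worker thread"));
        worker.start(); // runs concurrently with the main thread
        worker.join();  // wait for the worker to finish
        System.out.println("Hello from the main thread");
    }
}
```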

Threads can also pose risks; the main one is the shared variable/resource problem. Solutions include not sharing any variables, making variables immutable (so their value or state cannot change), and using a lock, which is a thread synchronization mechanism in Java. A related problem is the race condition, the most common concurrency correctness problem, which involves compound actions: two threads accessing a shared variable at the same time. Vaseekaran also explains deadlock, a condition where two or more threads are blocked forever, waiting on each other. Deadlocks are caused by inconsistent lock ordering and by limited resource capacity when a thread is waiting for another lock.
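Here’s a rough sketch of what that shared-variable problem looks like in Java, and one way to guard against it with a lock. The Counter class is my own example, not code from the blog; the synchronized keyword is what prevents two threads from interleaving inside increment().

```java
public class Counter {
    private int count = 0;

    // count++ is really a read, an add, and a write (a compound action),
    // so without synchronization two threads can lose updates.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get()); // reliably 200000 because of the lock
    }
}
```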

It’s very interesting to see how important concurrency is when it comes to making, or even just running, programs; it brings a whole new understanding of how modern software works. It’s also interesting to revisit terms like “deadlock,” since it’s a refresher on what they mean and what role they play in concurrency. Reading about how computers used to run programs gives me a new perspective on how programs run within a system, and on the solutions that were created so we can run programs more efficiently. When making software I want to come back to this, knowing that one of these days a problem such as deadlock or shared variables will happen to me, and the solutions Vaseekaran lists in the post will help me a ton.

Link to “Concurrent Programming – Introduction”: https://gowthamy.medium.com/concurrent-programming-introduction-1b6eac31aa66

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.

Object-Oriented Programming

Object Oriented Programming is a topic that I wanted to brush up on because it has been a long time since I have programmed with it or learned about it.

I read a blog by Omar Elgabry called “The Story of Object-Oriented Programming,” which was a great help in relearning the terms and uses of object-oriented programming. Omar goes over the major aspects of object-oriented programming, which are objects, abstraction, encapsulation, inheritance, and polymorphism. Omar explains that objects in programming are meant to represent real-world objects like a car, a phone, etc. When Omar gets to abstraction, he sums it up as focusing on the common properties and behaviors of objects and getting rid of what is not important. For encapsulation, Omar explains it is breaking a program down into small mini-programs in the form of classes, as well as hiding content that is not necessary to expose. Inheritance is when we take the common properties we created in an abstract class and apply them to classes that are more specific. The last topic Omar covers is polymorphism, which is when an object can take many different forms. Omar even gives some code at the end of his post to show readers a real example of object-oriented programming being used, which is a nice touch and helps the ideas sink in.
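To tie those terms together, here is a tiny Java sketch of my own (not Omar’s code) that shows abstraction, encapsulation, inheritance, and polymorphism in one place:

```java
// Abstraction: Vehicle keeps only the properties and behavior we care about.
abstract class Vehicle {
    // Encapsulation: the field is private and only exposed through a method.
    private final String name;

    Vehicle(String name) { this.name = name; }

    String getName() { return name; }

    // Each specific vehicle decides for itself how it moves.
    abstract String move();
}

// Inheritance: Car and Boat reuse everything Vehicle already provides.
class Car extends Vehicle {
    Car() { super("Car"); }
    String move() { return "drives on roads"; }
}

class Boat extends Vehicle {
    Boat() { super("Boat"); }
    String move() { return "sails on water"; }
}

public class OopDemo {
    public static void main(String[] args) {
        // Polymorphism: the same Vehicle reference takes many different forms.
        Vehicle[] vehicles = { new Car(), new Boat() };
        for (Vehicle v : vehicles) {
            System.out.println(v.getName() + " " + v.move());
        }
    }
}
```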

I chose this blog post to read because I wanted to get a clearer understanding of object-oriented programming and how it works. I have coded using object-oriented programming in the past, but I do not think I completely understood it, or how useful it can be, until now. Omar did a great job going over all the important pieces of object-oriented programming and explained them in an easy-to-understand way. After reading his blog I feel like I got exactly what I was looking for out of his work. I will be able to apply what I learned from Omar’s blog to my programming and to effectively explain object-oriented programming to someone else if I needed to. I have a feeling that a strong understanding of how object-oriented programming works is going to be important not only for my university classes but also for my future career as a software developer. I would highly recommend this blog post to someone who needs a straightforward and easy-to-understand overview of object-oriented programming.

Link: https://medium.com/omarelgabrys-blog/the-story-of-object-oriented-programming-12d1901a1825

From the blog CS@Worcester – Ryan Klenk's Blog by Ryan Klenk and used with permission of the author. All other rights reserved by the author.