Category Archives: CS-343

Refactoring code!

Before our more recent classes, where we learned about the Singleton and Simple Factory patterns, I knew about the concept of refactoring, but in a different light. Rather than using these patterns and methods to make my code more efficient through a pattern of sorts, I considered refactoring to be something more akin to making my code more readable and more efficient by breaking down useless portions of code. This isn’t exactly wrong, but it’s not completely right either!

Mainly because, though these approaches were important, they could be done even better! The main thing that I was missing in my own refactoring was obviously the design smells that we have recently gone over. Intuitively, I understood a few of the smells already, but now that I have a better grasp of many of them, it is clear that my older refactoring efforts were missing plenty of things!

For example, I used to clean up code and make it easier to understand by breaking it into chunks, which is good, but at the same time, I would also condense other areas since they seemed to ‘fit.’ It is clear now that I should NOT do that, but you live and you learn!
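To picture what “breaking it into chunks” looks like, here’s a quick Java sketch I threw together (an Extract Method refactoring; the invoice example and the flat tax rate are my own made-up assumptions, not from any particular source):

```java
// Before: one long method that validates, calculates, and prints all at once.
class InvoiceBefore {
    void printInvoice(double price, int quantity) {
        if (price < 0 || quantity < 0) {
            throw new IllegalArgumentException("no negative values");
        }
        double total = price * quantity;
        double totalWithTax = total * 1.0625; // assumed flat tax rate
        System.out.println("Total due: $" + totalWithTax);
    }
}

// After: the same behavior, broken into small, named chunks.
class InvoiceAfter {
    void printInvoice(double price, int quantity) {
        validate(price, quantity);
        System.out.println("Total due: $" + totalWithTax(price, quantity));
    }

    private void validate(double price, int quantity) {
        if (price < 0 || quantity < 0) {
            throw new IllegalArgumentException("no negative values");
        }
    }

    private double totalWithTax(double price, int quantity) {
        return price * quantity * 1.0625; // assumed flat tax rate
    }
}
```

The point is that each chunk now has a name that says what it does, without condensing unrelated pieces together just because they seem to ‘fit.’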

I think that the biggest resource that I have found so far is this website! Though it is basic and even a bit barebones, it gives a pretty good introduction to refactoring. It makes sure to cover things such as cleaning up dirty code from inexperience, the refactoring process, design/code smells, and refactoring techniques, and it shows off a few different design patterns as well!

One thing that might be helpful is that the website contains many different images to help break concepts down, and even code examples! I know for me, it helps tremendously when I can look at some code and see exactly what is happening! Though I understand the concepts a lot of the time, I find that I learn things much more quickly if I can look at the source and break things down myself! The website covers plenty of different coding languages too, so if, say, Java isn’t your best, you can look at the different patterns in C#, Python, etc.!

Overall though, I think that learning a more refined structure for refactoring code and implementing these various design patterns will help me tremendously! I know that there have been times when I have gone to look at my old code and I’m sitting there lost, confused, and asking, “Who did that?!” Hopefully, with this in mind, I’ll have a better understanding of how to proceed with my older code, past and future!

From the blog CS@Worcester – Bored Coding by iisbor and used with permission of the author. All other rights reserved by the author.

Getting a Grasp on GRASP

The topic I wanted to cover this week is GRASP. GRASP stands for General Responsibility Assignment Software Patterns. I read a blog post by Kamil Grzybek called “GRASP – General Responsibility Assignment Software Patterns Explained”. He does mention his post is inspired by and based on Craig Larman’s book “Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development”.

Grzybek explains that GRASP is a lesser-known set of rules that can be used in object-oriented programming, and that there are nine patterns to follow within GRASP. The nine patterns are the following:

1. Information Expert
2. Creator
3. Controller
4. Low Coupling
5. High Cohesion
6. Indirection
7. Polymorphism
8. Pure Fabrication
9. Protected Variations

Grzybek goes through each pattern, explaining them, and gives some code from Larman’s book to help give the reader a better understanding of the uses of the principles. The Information Expert pattern assigns a responsibility to the class that has the information needed to fulfill it. The Creator pattern decides which class should create an object; a class should have at least one of the following responsibilities to be a creator: it contains or compositely aggregates the object, records the object, closely uses the object, or initializes data for the object. The Controller pattern assigns the responsibility of handling a system operation to an object that represents the overall system or a use case scenario.

Low Coupling is a pattern where responsibilities are assigned with the idea of reducing the impact of change by lowering coupling. The High Cohesion pattern keeps objects focused and easy to use and understand by making sure all responsibilities of an element are related. Indirection avoids direct coupling by placing an intermediate object between two components. Polymorphism handles alternative types of classes through polymorphic operations. Pure Fabrication means making a new class when it is hard to figure out where a responsibility should be placed. The last pattern, Protected Variations, creates stable interfaces around points of variation or instability.
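To make at least one of these concrete, here is a minimal Java sketch of the Information Expert idea; the Order and LineItem names are my own stand-ins, not Grzybek’s or Larman’s actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Information Expert: Order holds the line items, so Order (not some
// outside "calculator" class) gets the responsibility of computing the total.
class LineItem {
    private final double price;
    private final int quantity;

    LineItem(double price, int quantity) {
        this.price = price;
        this.quantity = quantity;
    }

    // LineItem knows its own price and quantity, so it computes its subtotal.
    double subtotal() {
        return price * quantity;
    }
}

class Order {
    private final List<LineItem> items = new ArrayList<>();

    void add(LineItem item) {
        items.add(item);
    }

    // Order is the "expert" for the grand total, because it has the items.
    double total() {
        return items.stream().mapToDouble(LineItem::subtotal).sum();
    }
}
```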

I chose this topic because I thought learning more patterns that could improve my code would be beneficial to me. SOLID and GRASP are the first two sets of rules I have learned about, but there are plenty more that Grzybek mentions at the end of his post, so perhaps one day I will learn about those too. I did find GRASP a bit more confusing to understand than SOLID, but I am glad I had a chance to read about it. The main conclusion I took away from this post is that the management of responsibilities in software is crucial to creating high-quality architecture and code that can easily be changed. Grzybek ends his post with the words “the only thing that is certain is change. So be prepared”, which really emphasizes his point that software must be flexible. I will have to keep this in mind when developing code in the future.

Link: http://www.kamilgrzybek.com/design/grasp-explained/

From the blog CS@Worcester – Ryan Klenk's Blog by Ryan Klenk and used with permission of the author. All other rights reserved by the author.

Blog Post Number 1

This week I read a blog post, “Design Patterns in Programming – How and When to Use Them”, written by Ajibola Ojo on the blog DevGenius. He did a really great job of explaining, in a way that was easy for the reader to understand, how design patterns make it easier to develop more suitable solutions to problems. A few of the solutions he included, but was not limited to, were an optimal approach to avoid rewriting the same code and the use of packages. He explained that design patterns aren’t rigid; they are more like a set of guidelines for solving problems.

This specific blog post concentrates a lot on the State design pattern. He uses simple code to show an example of how to use a finite state machine, which is an abstract machine. His example code has the states “idle, receiving, calling, and on call”, and he goes through all the possibilities for connecting all the classes. By using the State design pattern, you can visualize a problem as a finite state machine and then translate that into code. With his classes he ended up with four possible states between which transitions can happen. He later goes on to show all the connections; for example, if caller A and caller B are both idle and caller A tries to call caller B, caller A’s state moves to calling while caller B’s moves to receiving.

The reason why I picked this blog post was that, in the midst of reading a lot of blog posts this week, I found that this one, even though it was shorter than most, was an easier read and straight to the topic, which is something I really appreciate. I liked how it also included diagrams and snippets of code, which made it easier for me to connect with what the writer was talking about and actually visualize everything he was saying. Sometimes when working on coding projects I tend not to diagram, but seeing the way it was explained really helped me visualize the importance. Even though it can sometimes be tedious, I now understand how it helps map out the interactions between the classes. Hopefully, with more practice using the State design pattern, I will learn to like and appreciate the process and master it.
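As a rough sketch of what that finite state machine could look like in code (my own Java approximation, not Ojo’s actual example), each state simply lists the states it is allowed to move to:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// The four states from the example: idle, receiving, calling, and on call.
enum CallState { IDLE, RECEIVING, CALLING, ON_CALL }

// A tiny finite state machine: each state maps to the states it may move to.
class Phone {
    private static final Map<CallState, Set<CallState>> TRANSITIONS = new EnumMap<>(Map.of(
            CallState.IDLE, EnumSet.of(CallState.CALLING, CallState.RECEIVING),
            CallState.CALLING, EnumSet.of(CallState.ON_CALL, CallState.IDLE),
            CallState.RECEIVING, EnumSet.of(CallState.ON_CALL, CallState.IDLE),
            CallState.ON_CALL, EnumSet.of(CallState.IDLE)));

    private CallState state = CallState.IDLE;

    void moveTo(CallState next) {
        if (!TRANSITIONS.get(state).contains(next)) {
            throw new IllegalStateException(state + " cannot move to " + next);
        }
        state = next;
    }

    CallState state() {
        return state;
    }
}
```

With this in place, the caller A / caller B walkthrough maps directly onto code: A’s phone does moveTo(CallState.CALLING) while B’s does moveTo(CallState.RECEIVING), and any illegal jump throws an exception.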

View at Medium.com

From the blog CS@Worcester – CS- Raquel Penha by raqpenha and used with permission of the author. All other rights reserved by the author.

Emerging Technology: PolyGloT

I found an academic conference article describing a new eTutoring system called PolyGloT. I find that technology serves many different needs with varying degrees of success. Specifically, in education, technology can be additive or a burden, or sometimes even a mix of both. The one-size-fits-all approach to education has varying degrees of success, as it does not always accommodate the needs of a specific student; one way of teaching may not be the most suitable way for an individual to truly grasp a subject well enough for practical understanding and career development.

I found that, for myself specifically, trying to conform to a system has only gotten in the way of my actual learning, and after four years I’ve only just begun to realize what education means to me and how I truly learn. Fortunately, technological educational resources have filled in the gaps and often been additive to my education. Outside of education, aspects of computer games and social networks have harbored creativity and arguably social benefits, much like what the current education system may be successful at. So, what is PolyGloT? PolyGloT is a personalized and gamified eTutoring system in its early development phase. The platform aims to accommodate neurodiversity in students within a system that currently takes a one-size-fits-all approach. PolyGloT does this by “provid[ing] an open, content-agnostic and extensible framework (see [below] for its architecture) for designing and consuming adaptive and gamified learning experiences.”

On the topic of my own education, I have found that practical, hands-on approaches have been the most effective way for me to truly grasp a subject and understand its application. For this reason, seeing that this article actively applies the principles we are being taught confirms what I’m learning and even piques my interest. Specifically, the architectural design helped me understand the practical use of this tool. Seeing it “out in the wild” helped me grasp its capabilities and purposes almost vicariously, allowing me to see its potential and use it for myself. This leads me to the question of whether their design is based on an existing design pattern, or whether designs can simply be created for whatever purpose fits one’s needs.

Although we have yet to cover the topic of frontend and backend in our CS-343 class, this topic stands out to me, as I see that it is a deciding factor in whether a system achieves its purpose successfully. In the case of PolyGloT, I think that this correlation is key in allowing teachers to effectively teach a neurodiverse class. What is clear from our activities in class is that designing a solid architecture is necessary for an efficient, working system. Similarly, the structure of this course is one that is allowing me to grasp subjects and, more importantly, replicate the structural aspects of software development as a profession. I see this course as the architecture of my career in computer science, and its practical replication allows for a solid foundation.

https://arxiv.org/abs/2210.15256

From the blog CS@Worcester – Sovibol's Glass Case by Sovibol Keo and used with permission of the author. All other rights reserved by the author.

Context, Containers, Components and Code

This week I stumbled upon an article about software architectural diagrams. An architectural diagram is a visual representation that maps out the physical implementation of the components of a software system. Besides the typical UML diagram, a different way to effectively communicate how you plan to build a software system, or how an existing software system works, is the C4 model. The C4 model, which stands for context, containers, components, and code, is a set of hierarchical diagrams that you can use to describe your software architecture at different zoom levels, and each level can be useful for a different type of audience. As developers, we can think of this model as a Google Map for our code. In order to create a map of our code, we first need a common set of abstractions to describe the static structure of a software system. With the C4 model, we can describe the static structure of a software system in terms of just containers, components, and code, while also taking into consideration the people who use the system.

We can break the C4 model into four different levels, or steps, where each level adds something new to the diagram. Level one is a simple system context diagram that shows the software system you are building and how it fits into the world, in terms of the people who use it and the other software systems it interacts with. Level two is a container diagram. This zooms into the software system and shows the containers that make up that system; containers can include applications, data stores, microservices, and more. Level three, the component diagram, dives into an individual container to show the components inside of it. These components map to the real abstractions and groupings of code in our codebase. Lastly, level four is about the code itself. Here we can zoom into an individual component to show how that specific component is implemented, which shows that a component is made up of a number of classes, with the implementation details directly reflecting the code.
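As a small illustration of the jump from level three to level four, here is a made-up Java sketch: the “component” named at level three is just a grouping, and the classes below are its level four implementation (all names here are invented for this example):

```java
// Level 3 might name a "sign-in" component inside a web application container;
// Level 4 is the handful of classes that actually make that component up.

// The component's public face.
interface SignInService {
    boolean signIn(String username, String password);
}

// One of the classes behind the component at level four.
class UserRepository {
    boolean credentialsMatch(String username, String password) {
        // A real container would hit a data store here; hard-coded for the sketch.
        return "admin".equals(username) && "secret".equals(password);
    }
}

// Another level four class: the implementation that ties the component together.
class BasicSignInService implements SignInService {
    private final UserRepository users = new UserRepository();

    @Override
    public boolean signIn(String username, String password) {
        return users.credentialsMatch(username, password);
    }
}
```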

The C4 model can be a simple, yet effective way to communicate software architecture at multiple different levels of abstraction. It can also be used as a way to introduce a different modeling technique to software development teams.

https://www.infoq.com/articles/C4-architecture-model/

From the blog CS@Worcester – Conner Moniz Blog by connermoniz1 and used with permission of the author. All other rights reserved by the author.

What is the Abstract Factory Design Pattern?

Learning new design patterns can be interesting because, for the most part, it explores a new way of coding that I thought I would never have to use. This week I’ve been refactoring design patterns in my code, such as the Singleton pattern. The Singleton pattern is a design pattern that restricts the instantiation of a class to one object. Basically, it’s used so that once that object is created, we don’t have to recreate it; recreating the object would wipe out the information that you want it to hold. That is where the Singleton pattern shows up! But for now, I want to go over the Abstract Factory design pattern. The Abstract Factory pattern is similar to the Factory pattern, but according to the blog “Abstract Factory Design Pattern in Java”, published by Pankaj, the Abstract Factory design is like a factory of factories.
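Since the Singleton came up, here is a minimal Java sketch of the idea (my own example, not code from Pankaj’s post):

```java
// A minimal Singleton sketch: one shared instance, created lazily, so the
// state it holds is never wiped out by accidentally re-creating the object.
public class Counter {
    private static Counter instance;
    private int count = 0;

    private Counter() { } // private constructor blocks "new Counter()" elsewhere

    public static synchronized Counter getInstance() {
        if (instance == null) {
            instance = new Counter();
        }
        return instance;
    }

    public void increment() { count++; }
    public int getCount() { return count; }
}
```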

If you are familiar with the Factory design pattern, it uses a single factory class that returns different subclasses based on the inputs provided, using if-else or switch statements to figure out which class it is supposed to produce. In the Abstract Factory pattern, the if-else and switch statements are thrown out the window; instead, we have a factory class for each subclass, and an abstract factory class that returns a subclass based on the input factory class. So the Abstract Factory uses multiple subclasses of the factory class, tied together by a superclass, which is the abstract factory.

The Abstract Factory leans on interfaces and extension rather than implementation. In contrast, in the Factory pattern all the creation logic is basically put into one class, the factory class, and that one factory class is what you call whenever a subclass is needed. With the Abstract Factory, we instead have separate classes for these sub-factories that can be changed or coded to our liking. It’s a lot more robust, because we aren’t hammered down by conditional statements, but since we lose that simplicity, things do tend to be more complicated, because of the many classes involved in making a factory and all the specifics that have to go into those sub-factory classes, I guess you could call them.
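Here is a stripped-down Java sketch of the structure described above, with one factory per subclass and no conditionals anywhere (the shape names are my own, not from the blog):

```java
interface Shape {
    String describe();
}

class Circle implements Shape {
    public String describe() { return "circle"; }
}

class Square implements Shape {
    public String describe() { return "square"; }
}

// The "factory of factories" contract: every concrete factory makes one product.
interface ShapeFactory {
    Shape create();
}

class CircleFactory implements ShapeFactory {
    public Shape create() { return new Circle(); }
}

class SquareFactory implements ShapeFactory {
    public Shape create() { return new Square(); }
}

class Demo {
    // The caller passes in a factory instead of a string, so no if-else needed.
    static Shape build(ShapeFactory factory) {
        return factory.create();
    }

    public static void main(String[] args) {
        System.out.println(build(new CircleFactory()).describe()); // prints "circle"
    }
}
```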

It would be really interesting to implement this in one of my projects. Right now I’m making modifications to a game called Minecraft; you might have heard of it, it’s a pretty popular game. It would be interesting to create an Abstract Factory design for the many tools and blocks that I’ll be adding to the game. It might seem complicated, but it could help me organize the mod a lot better.

Link to Blog “Abstract Factory Design Pattern in Java” by Pankaj:

https://www.digitalocean.com/community/tutorials/abstract-factory-design-pattern-in-java

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.

You ain’t going to need it? (YAGNI)

YAGNI! You ain’t going to need it!

Now, someone might wonder why that stuck out to me. Well, it’s because I often tend to have the urge to ‘future proof’ things! And this is in direct contrast with YAGNI. Not only do I try to future proof things in coding, but I tend to do it in a variety of different fields.

Building a new computer? Let’s shell out a few extra dollars to make sure I can upgrade my storage in the future.
Making a new base in Minecraft? I’ll clear out this space so I have room to expand!
Buying groceries? Let’s just pick up this ingredient in case I might need it!

When has that helped me? Well, actually pretty often! But… In the grand scheme of things, the ratio is probably quite small. If I had to put a number to it, I’d say… It’s probably…

1/10

Potentially even less! So why the heck am I still stuck on this idea of future proofing? No idea! But anyways, to get back on topic, YAGNI seemed useful not just in my coding, but everywhere, so I started to look more into it! In my search for a grander understanding, I discovered this post by Martin Fowler. And boy, oh boy, did Martin teach me a lot of things!

Firstly, it makes sense! The concept of future proofing, or preparing for something that you will eventually need, seems sound, right? But things change! Especially when it comes to specifications and needs. Things might seem set in stone one day, but the next, something might have changed and the requirements become different! Any and all investment and work you put into that ‘feature’ will be wasted, so don’t think about it until you need it!
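Here’s a made-up Java illustration of that waste: the ‘future proofed’ version carries parameters no requirement has asked for yet, while the YAGNI version does only today’s job (all names here are hypothetical):

```java
// Future-proofed version: extra abstraction for formats nobody asked for.
interface ReportExporter {
    void export(String report, String format, boolean compress);
}

class ConsoleExporter implements ReportExporter {
    @Override
    public void export(String report, String format, boolean compress) {
        // "format" and "compress" are dead weight until a real need shows up.
        System.out.println(report);
    }
}

// YAGNI version: exactly what today's requirement needs, nothing more.
class ReportPrinter {
    void print(String report) {
        System.out.println(report);
    }
}
```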

In my own case, I can definitely say that if I had taken a more YAGNI approach, I’d have saved myself countless hours and at least a small pile of money! Plans are plans for a reason! Just because I think I’ll want it later doesn’t mean I’ll actually want it when the time comes!

That computer I future proofed? Never upgraded it!
That space I cleared out for an expansion in Minecraft? Never expanded it!
That one ingredient I picked up? Never used it!

These have just been my experiences, but I am certain that at least some people can understand where I’m coming from. It really does seem to make sense to prepare for the future, right? But if you’re really preparing for the future, you had better make sure you absolutely are going to use it! This is certainly a lot easier for real-life things, but when it comes to coding and meeting specifications, it is just so much harder! You probably have about the same chance of success looking into a crystal ball and reading your own future as you do guessing that a feature will be needed! So…

Don’t do it! You ain’t going to need it!

From the blog CS@Worcester – Bored Coding by iisbor and used with permission of the author. All other rights reserved by the author.

Getting better and better

As we have been progressing through the weeks, working on the POGILs as well as the homework, I find myself slowly getting the hang of the workflow, as well as of utilizing VS Code. I, for once, understand what is going on in the classes and am eager to continue to learn, as it is finally being pieced together in my head. The biggest part would probably have to be the feedback that I receive, and that probably mostly everyone else gets as well; it’s not some random jargon about how something is wrong, but more like a guiding hand leading you to a solution that you aren’t too far from. The biggest takeaway I have gotten was that organizing the UML diagrams in the homework really connected the dots on how these classes and models work together, better than anything has before; I can understand paths a little better than I did.

I only hope I continue to understand the work I continue to do, and I’m eager to see what work I can get done.

From the blog cs@worcester – Marels Blog by mbeqo and used with permission of the author. All other rights reserved by the author.

YAGNI!

While looking through blogs, I came across a familiar acronym that I use all the time when it comes to developing software and systems. The acronym is “YAGNI”, which stands for “You Ain’t Gonna Need It”. According to the blog post “Automation Principles – YAGNI / Premature Optimization”, it’s the principle of extreme programming that states a programmer should not add functionality until deemed necessary. The blog talks about how many engineers will spend multiple hours trying to build the “right system” the first time. In some cases, trying to build a flawless system on the first go can be rather difficult to achieve. The problem is that programmers spend too much time worrying about efficiency in the wrong places, and that premature optimization can cause more harm than good.

The blog goes over Big-O notation, which explains that it does not care about constants but about the long-term growth rate of functions. This is a good rule to consider, because introducing an optimization before a fraction of the code is even written can make a program a lot more difficult to support; as explained, it would increase design considerations, the likelihood of race conditions, and the difficulty of troubleshooting. Optimizing certain processes might not lead to any time savings or real optimization. In fact, it could do the exact opposite; a good example the blog gives is constructing lambdas and list comprehensions instead of simple for loops in Python. The blogger mentions that in his personal experience he would add non-functional requirements, such as authentication and logging, too early, adding features before they were needed. With that being said, I remember spending so much time on adding the ability to connect my bank account to my finance application that I didn’t have the time to code the application itself.

The blogger also talks about network automation, where speed is everything and ignoring YAGNI isn’t really in the cards. He goes into detail about real-world examples from the network automation process, explaining issues with multithreading: overloading the TACACS server with too many requests at once is very problematic, and scaling wide too fast can cause processes to slow down and use too many resources; overall, it’s very inefficient. Configuration generation, likewise, can take too long and be very inefficient. With all this in mind, the blogger isn’t saying not to consider tomorrow’s problems, but is more in line with building things up as they go.
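As a small made-up Java illustration of the premature-optimization side of this (not code from the blog): a cache bolted onto a method nobody has measured adds state and subtle hazards without any proven savings:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class Totals {
    // Premature optimization: a cache added before anyone measured whether
    // sum() was ever a bottleneck. More state, more ways to break (for one,
    // the cache is only valid if the lists are never mutated afterwards).
    private static final Map<List<Integer>, Integer> CACHE = new ConcurrentHashMap<>();

    static int cachedSum(List<Integer> values) {
        return CACHE.computeIfAbsent(values, Totals::sum);
    }

    // The simple version is already O(n); start here and measure later.
    static int sum(List<Integer> values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }
}
```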

“Automation Principles – YAGNI / Premature Optimizations” :

https://blog.networktocode.com/post/Principle-YAGNI/

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.

Semantic Versioning (There’s a standard?!?!)

Being the person that I am, I made my own rules without being influenced by anyone else. For me, I simply thought that

Well… It’s my code, right? That means I can name things however I want, right? So if I said this is version 0.9, then it’s version 0.9! Or if this is version 2.0, then it’s 2.0!

In response, my friend, who was more familiar with versioning than I was thanks to an internship opportunity, informed me of a few soft rules and shed some light on a few details, like how a version is not just a made-up number that you assign to your project/code. Hearing this, I was honestly pretty confused, but I looked into it and discovered Semantic Versioning!

And wow, did that blow me away. Looking back at the way I approached things, I was no different from a headless chicken running around! Learning about Semantic Versioning through this video, I found it both so straightforward and so useful. No longer will I number my versions willy-nilly, nor will I forget my own versions and what happened in each update! It’ll finally make sense!

To break down Semantic Versioning, consider versions as three parts!

The Major version, so think 1.0.0
The Minor version, such as 0.1.0
And finally, patches, like so 0.0.1

Major versions are, well, major! These are versions that leap forward and have large changes! Releasing a major version usually implies that the code has changed in such a way that it will no longer work with older versions. These changes might be large architectural changes, or other large-scale changes that break the existing structures or APIs.

Minor versions are NOT as massive (duh), but they come with plenty more than, say, patches! Minor versions could add features, or update existing features, and most of the time, nothing should break. Functionality that existed before should still work, and the new features shouldn’t impact or break anything either (ideally).

Patches are like bandages! These are where the bug fixes are! If something is broken in an unintended way, that’s when patches come in, and they should be used to iron out problems that were not accounted for. It could be something minor, like the output of some code being a 2 instead of a 1, or it could be a bit more major, like a whole portion of the code crashing when run, but patches are for fixes!

As an example, I’ll run through a brief bit of versioning on a made up project! Let’s say… I’m making a program to print out a leaderboard. It connects to an online library to gather scores, and then displays it to whoever connects to it.

After long and grueling work, I did it! I finally completed it and so, I release…
Version 1.0.0
It’s done and completed, but then… It turns out, my leaderboard only prints out the WORST scores! Dang it, I flipped the list around by mistake! So off I go, fixing this bug. After everything is working again, my next version should be 1.0.1.
But then… I think to myself, wouldn’t it be cool if I added region support too? Then we could see where this score is from, and potentially even group regions by high scores!
So, after adding this new feature, my next version will be 1.1.0!
Pretty straightforward, right? It’s really clear what is going on too!
Let’s say another bug is introduce, bam, 1.1.1!
Then more features, 1.2.0
And finally another bug is squashed! 1.2.1
And… another bug! 1.2.2
But… then wait, there’s another bug. 1.2.3
Things feel good and everything works, but then I learn about a whole new library that does what I want and gives me more room for growth. So I rework the whole program with this new library! The old functionality still works, but since I’m switching to a whole new system, this version will no longer work with the old one, and this is when I jump a major version.
We’re now on version 2.0.0!

I hope that example made sense! There is more to semantic versioning as well, but that is the quick breakdown. Introducing things like alpha/beta tags and also ^ and ~ adds some other intricacies, but for now… let’s not think about them! Alphas/betas are optional tags for testing purposes, and to be honest, ^ and ~ are a bit beyond my grasp. I don’t fully understand them yet! I’ll look into them more, and maybe I’ll have a future update post detailing what I’ve discovered!

From the blog CS@Worcester – Bored Coding by iisbor and used with permission of the author. All other rights reserved by the author.