Blog Post Number 1

This week I read a blog post titled “Design Patterns in Programming: How and When to Use Them,” written by Ajibola Ojo for the blog DevGenius. He did a really great job explaining things in a way that made it easy for the reader to understand how design patterns lead to more suitable solutions to problems. Among the ideas he covered were an optimal approach to avoid rewriting the same code and the use of packages. He explained that design patterns aren’t rigid; they are more like a set of guidelines for solving problems.

In this particular post he concentrates on the state design pattern. He uses simple code to show an example of a finite state machine, which is an abstract machine. His example code has the states “idle, receiving, calling, and on call,” and he goes through all the possible transitions connecting the classes. By using the state design pattern, you can visualize a problem as a finite state machine and then translate that model into code. With his classes he ended up with four possible states between which transitions can happen. He then walks through all the connections: for example, if caller A and caller B are both idle and caller A tries to call caller B, caller A’s state moves to calling while caller B’s moves to receiving.

The reason I picked this blog post is that, in the midst of reading a lot of blog posts this week, I found that this one, even though it was shorter than most, was an easier read and straight to the point, which is something I really appreciate. I also liked that the author included diagrams and snippets of code, which made it easier for me to connect what he was talking about and actually visualize everything he was saying. When working on coding projects I tend not to diagram, but seeing it explained this way really helped me see its importance. Even though diagramming can sometimes be tedious, I now understand how it helps capture the interactions between classes. Hopefully, with more practice using the state design pattern, I will learn to appreciate the process and master it.
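
To make the idea concrete, here is a minimal sketch of the phone example as a finite state machine in Java. This is my own illustration, not the author’s code; the class and method names are hypothetical.

    // A phone modeled as a tiny finite state machine with four states.
    public class Phone {
        enum State { IDLE, CALLING, RECEIVING, ON_CALL }

        private State state = State.IDLE;

        public State getState() { return state; }

        // An idle phone dials another idle phone:
        // caller -> CALLING, callee -> RECEIVING.
        public void call(Phone other) {
            if (state == State.IDLE && other.state == State.IDLE) {
                state = State.CALLING;
                other.state = State.RECEIVING;
            }
        }

        // The receiving phone answers: both phones move to ON_CALL.
        public void answer(Phone caller) {
            if (state == State.RECEIVING && caller.state == State.CALLING) {
                state = State.ON_CALL;
                caller.state = State.ON_CALL;
            }
        }

        // Either side hangs up: both phones return to IDLE.
        public void hangUp(Phone other) {
            state = State.IDLE;
            other.state = State.IDLE;
        }
    }

Each method checks the current state before doing anything, which is exactly the finite-state-machine idea: a transition is only legal from particular states.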


From the blog CS@Worcester – CS- Raquel Penha by raqpenha and used with permission of the author. All other rights reserved by the author.

The 100% Availability Trap: Uwe Friedrichsen

Why I chose this post

As we have progressed through the course material, we have seen the advantages that microservices have over monolithically structured systems. These advantages overcome most of the limitations that monolithic structures exhibit. However, microservices, and distributed software in general, have some shortcomings of their own. In this blog the author focuses on some of the errors that may arise within a distributed system, which sheds some light on some of the cons of microservices.

Summary of the post

The author begins by describing the 100% availability trap that many developers fall into. This trap is where developers “assume that all remote peers, including infrastructure tools like databases, message brokers and alike, are available 100% of the time, i.e., that they are never down.” The author breaks down why a distributed system cannot be available 100% of the time: a system is not available if it does not respond, responds slower than specified, or provides a wrong response. The author then discusses the “nines,” which describe what 90% and higher availability looks like for a distributed system. Reaching these levels takes a lot of consideration, from patch and maintenance days to managing test and runtime environments, and the costs and effort increase drastically at each level.

The author then points out that as the number of nodes increases, the availability of the overall system is expected to decrease. Another factor to consider is the network that facilitates communication between the nodes: the network must be in perfect condition to ensure perfect availability, which is very hard to achieve in practice.
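
A quick back-of-the-envelope calculation of my own (not from the post) shows why. If a request has to pass through n components in series, each independently available with probability a, the whole chain is available with probability a^n. For example, ten components that are each 99.5% available give 0.995^10 ≈ 0.951, so the system as a whole is only about 95% available, even though every individual piece sounds nearly perfect.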

Finally, the author ends with this statement: “The question is not, if failures will hit you in a distributed system landscape. The only question left is, when and how bad they will hit you.”

Reflections and application

Despite the countless advantages that a distributed system has, I have realized that it cannot achieve perfection. “Every dog has its own fleas,” and the author has clearly shown me how this is true. His final remark also brings in the perspective of always being ready to experience an issue. I am sure these concepts will prove valuable and important in my career and in any projects I work on that involve microservices.

From the blog De Arrow's Webpage by Samuel Njuguna and used with permission of the author. All other rights reserved by the author.

Emerging Technology: PolyGloT

I found an academic conference article describing a new eTutoring system called PolyGloT. Technology serves many different needs with varying degrees of success; in education specifically, it can be additive, a burden, or sometimes a mix of both. The one-size-fits-all approach to education has mixed success because it does not always accommodate the needs of a specific student: the standard way of teaching may not be the most suitable way for an individual to truly grasp a subject well enough to apply it practically and build toward career development.

I have found that, for me specifically, trying to conform to a system has only gotten in the way of my actual learning, and after four years I have only just begun to realize what education means to me and how I truly learn. Fortunately, technological educational resources have filled in the gaps and often been additive to my education. Outside of education, aspects of computer games and social networks have harbored creativity and arguably social benefits similar to what the current education system may be successful at. So, what is PolyGloT? PolyGloT is a personalized and gamified eTutoring system in its early development phase. The platform aims to accommodate neurodiversity in students within a system that currently takes a one-size-fits-all approach. PolyGloT does this by “provid[ing] an open, content-agnostic and extensible framework (see [below] for its architecture) for designing and consuming adaptive and gamified learning experiences.”

On the topic of my own education, I have found that practical, hands-on approaches have been the most effective way for me to truly grasp a subject and understand its application. For this reason, seeing this article actively apply principles we are being taught confirms what I am learning and even piques my interest. Specifically, the architectural design helped me understand the practical use of this tool. Seeing it “out in the wild” helped me grasp its capabilities and purposes almost vicariously, allowing me to see its potential and imagine using it myself. This leads me to the question of whether their design is based on an existing design pattern, or whether designs can simply be created for whatever purpose fits one’s needs.

Although we have yet to cover the topics of frontend and backend in our CS-343 class, this subject stands out to me because I see that architecture is the deciding factor in whether a system achieves its purpose successfully. In the case of PolyGloT, I think this connection is key to allowing teachers to effectively teach a neurodiverse class. What is clear from our activities in class is that designing a solid architecture is necessary for an efficient, working system. Similarly, the structure of this course is allowing me to grasp subjects and, more importantly, to practice the structural side of software development as a profession. I see this course as the architecture of my career in computer science, and its practical exercises give me a solid foundation.

https://arxiv.org/abs/2210.15256

From the blog CS@Worcester – Sovibol's Glass Case by Sovibol Keo and used with permission of the author. All other rights reserved by the author.

Context, Containers, Components and Code

This week I stumbled upon an article about software architecture diagrams. An architectural diagram is a visual representation that maps out the physical implementation of the components of a software system. Beyond the typical UML diagram, a different way to effectively communicate how you plan to build a software system, or how an existing one works, is the C4 model. The C4 model, which stands for context, containers, components, and code, is a set of hierarchical diagrams that you can use to describe your software architecture at different zoom levels, and each level can be useful for a different type of audience. As developers, we can think of this model as a Google Map for our code. To create a map of our code, we first need a common set of abstractions to describe the static structure of a software system. With the C4 model, we can describe that static structure in terms of just containers, components, and code, while also taking into account the people who use the system.

We can break the C4 model into four levels, or steps, where each level adds something new to the diagram. Level one is a simple system context diagram that shows the software system you are building and how it fits into the world in terms of the people who use it and the other software systems it interacts with. Level two is a container diagram; it zooms into the software system and shows the containers that make it up, which could include applications, data stores, microservices, and more. Level three, the component diagram, dives into an individual container to show the components inside of it; these components map to the real abstractions and groupings of code in our codebase. Lastly, level four is based solely on code: here we can jump into an individual component to show how it is implemented, demonstrating that components are made up of a number of classes whose implementation details directly reflect the code.
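
As a tiny illustration of what level four bottoms out into, here is a sketch of my own (not from the article): a single level-three component, say a hypothetical ScoreRepository, is ultimately just a grouping of real code, such as an interface and the classes that implement it.

    // Level 4 of the C4 model is just real code: one level-3 "component"
    // (a hypothetical ScoreRepository) mapped onto concrete types.
    interface ScoreRepository {
        int topScoreFor(String player);
    }

    class DatabaseScoreRepository implements ScoreRepository {
        @Override
        public int topScoreFor(String player) {
            // Placeholder: a real implementation would query the data store here.
            return 0;
        }
    }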

The C4 model can be a simple, yet effective way to communicate software architecture at multiple different levels of abstraction. It can also be used as a way to introduce a different modeling technique to software development teams.

https://www.infoq.com/articles/C4-architecture-model/

From the blog CS@Worcester – Conner Moniz Blog by connermoniz1 and used with permission of the author. All other rights reserved by the author.

What is the Abstract Factory Design Pattern?

Learning new design patterns can be interesting because, for the most part, each one explores a way of coding that I thought I would never have to use. This week I’ve been refactoring design patterns in my code, such as the Singleton pattern. The Singleton pattern is a design pattern that restricts the instantiation of a class to one object: once that object is created, we don’t have to recreate it, since recreating it would wipe out the information we want it to hold. That is where the Singleton pattern shows up! But for now, I want to go over the Abstract Factory design pattern. The Abstract Factory pattern is similar to the Factory pattern, but according to the blog “Abstract Factory Design Pattern in Java” published by Pankaj, the Abstract Factory design is like a factory of factories.
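
As a quick refresher, here is a minimal Singleton sketch in Java. This is my own illustration with a hypothetical AppConfig class, not code from the blog.

    // Singleton: the class controls its single instance itself.
    public class AppConfig {
        private static AppConfig instance;

        // Private constructor stops anyone else from instantiating it.
        private AppConfig() { }

        // Everyone shares the one lazily created instance,
        // so the state it holds is never wiped by a second "new".
        public static synchronized AppConfig getInstance() {
            if (instance == null) {
                instance = new AppConfig();
            }
            return instance;
        }
    }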

If you’re familiar with the Factory design pattern, it uses a single factory class that returns different subclasses based on the inputs provided, using if-else or switch statements to figure out which class it’s supposed to bring up. In the Abstract Factory pattern, the if-else and switch statements are thrown out the window; instead, we have a factory class for each subclass, and then an abstract factory that returns the right subclass based on the input factory class. So the Abstract Factory uses multiple subclasses of the factory class, unified under a superclass, which is the abstract factory.

The Abstract Factory relies more on interfaces and extension than on direct implementation. In contrast, with the Factory method, all the subclasses are essentially handled by one factory class, and that one class is what you call whenever a subclass is needed. With Abstract Factory, we instead have separate classes representing those subclasses that can be changed or coded to our liking. It’s a lot more robust because we aren’t weighed down by conditional statements, but since we lose that simplicity, things do tend to be more complicated: there are many classes involved in making a factory, plus all the specifics and ideas that have to go into each sub-factory class.
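
To make that concrete, here is a minimal sketch in Java. The class names are my own hypothetical examples, not necessarily the ones from Pankaj’s tutorial; the point is that each subclass gets its own factory, and the “factory of factories” simply asks whichever factory it is handed.

    // Product hierarchy: the subclasses we want to create.
    interface Computer {
        String spec();
    }

    class PC implements Computer {
        public String spec() { return "PC: 16 GB RAM"; }
    }

    class Server implements Computer {
        public String spec() { return "Server: 64 GB RAM"; }
    }

    // One factory per subclass: no if-else or switch anywhere.
    interface ComputerFactory {
        Computer create();
    }

    class PCFactory implements ComputerFactory {
        public Computer create() { return new PC(); }
    }

    class ServerFactory implements ComputerFactory {
        public Computer create() { return new Server(); }
    }

    // The "factory of factories": returns a product based on
    // which concrete factory the caller passes in.
    class ComputerProducer {
        static Computer build(ComputerFactory factory) {
            return factory.create();
        }
    }

Calling ComputerProducer.build(new ServerFactory()) yields a Server, and adding a new kind of Computer later means adding a new class and factory rather than editing a growing switch statement.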

It would be really interesting to implement this in one of my projects. Right now I’m making modifications to a game called Minecraft; you might have heard of it, it’s a pretty popular game. It would be interesting to create an Abstract Factory design for the many tools and blocks that I’ll be adding to the game. It might seem complicated, but it could help me organize the mod a lot better.

Link to Blog “Abstract Factory Design Pattern in Java” by Pankaj:

https://www.digitalocean.com/community/tutorials/abstract-factory-design-pattern-in-java

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.

You ain’t going to need it? (YAGNI)

YAGNI! You ain’t going to need it!

Now, someone might wonder why that stuck out to me. Well, it’s because I often tend to try to ‘future proof’ things! And this is in direct contrast with YAGNI. Not only do I try to future proof things in coding, but I tend to do it in a variety of different fields.

Building a new computer? Let’s shell out a few extra dollars to make sure I can upgrade my storage in the future.
Making a new base in Minecraft? I’ll clear out this space so I have room to expand!
Buying groceries? Let’s just pick up this ingredient in case I might need it!

When has that helped me? Well, actually pretty often! But… In the grand scheme of things, the ratio is probably quite small. If I had to put a number to it, I’d say… It’s probably…

1/10

Potentially even less! So why the heck am I still stuck on this idea of future proofing? No idea! But anyways, to get back on topic, YAGNI seemed useful not just in my coding, but everywhere, so I started to look more into it! In my search for a grander understanding, I discovered this post by Martin Fowler. And boy, oh boy, did Martin teach me a lot of things!

Firstly, it makes sense! The concept of future proofing, preparing for something that you will eventually need, seems sound, right? But things change! Especially when it comes to specifications and needs. Things might seem set in stone one day, but the next, something changes and the requirements become different! Any and all investment and work you put into that ‘feature’ will be wasted, so don’t think about it until you need it!

In my own case, I can definitely say that if I had taken a more YAGNI approach, I’d have saved myself countless hours and at least a small pile of money! Plans are plans for a reason! Just because I think I’ll want it later doesn’t mean I’ll actually want it when the time comes!

That computer I future proofed? Never did it!
That space I cleared out for an expansion in Minecraft? Never expanded it!
That one ingredient I picked up? Never used it!

These have just been my experiences, but I am certain that at least some people can understand where I’m coming from. It really does seem to make sense to prepare for the future, right? But if you’re really preparing for the future, you had better make sure you absolutely are going to use it! This is certainly a lot easier for real-life things, but when it comes to coding and meeting specifications, it is just so much harder! You probably have about the same chance of reading your own future in a crystal ball as you do of correctly guessing that a feature will be needed! So…

Don’t do it! You ain’t going to need it!

From the blog CS@Worcester – Bored Coding by iisbor and used with permission of the author. All other rights reserved by the author.

Getting better and better

As we have been progressing through the weeks, working on the POGILs as well as the homework, I find myself slowly getting the hang of the workflow and of using VSCode. For once I understand what is going on in the classes, and I am eager to continue learning as it finally gets pieced together in my head. The biggest part would probably have to be the feedback that I receive, and that probably mostly everyone else gets as well; it’s not some random jargon about how something is wrong, but more like a guiding hand toward the solution you aren’t too far from. My biggest takeaway has been that organizing the UML diagrams in the homework connected the dots on how these classes and models work together better than anything has before, and I can understand paths a little better than I did.

I only hope I continue to understand the work ahead of me, and I’m eager to see what I can get done.

From the blog cs@worcester – Marels Blog by mbeqo and used with permission of the author. All other rights reserved by the author.

YAGNI!

While looking through blogs, I came across a familiar acronym that I use all the time when developing software and systems: “YAGNI,” which stands for “You Ain’t Gonna Need It.” According to the blog “Automation Principles – YAGNI / Premature Optimization,” it is a principle of extreme programming that states a programmer should not add functionality until it is deemed necessary. The blog talks about how many engineers will spend hours trying to build the “right system” on the first try, when in reality building a flawless system in one go is very difficult to achieve. The problem is that programmers spend too much time worrying about efficiency in the wrong places, and that premature optimization can cause more harm than good.

The blog also goes over Big-O notation, which cares not about constants but about the long-term growth rate of functions; an algorithm that takes 2n steps and one that takes n steps are both just O(n). This is a good rule to keep in mind, because introducing optimization before a fraction of the code is even written can make a program much harder to support: it increases design considerations, the likelihood of race conditions, and the difficulty of troubleshooting. Optimizing certain processes might not lead to any time savings or real optimization at all. In fact, it can do the exact opposite; a good example the blog gives is constructing lambdas and list comprehensions in Python where simple for loops would do. The blogger mentions that in his personal experience he would add non-functional requirements, such as authentication and logging, too early, adding features before they were needed. On that note, I remember spending so much time adding the ability to connect my bank account to my finance application that I didn’t have time to code the application itself.

The blogger then talks about network automation, where everything seems to be about speed, so following YAGNI doesn’t always feel like it’s in the cards. He goes into real-world examples from the network automation process: with multithreading, overloading the TACACS server with too many requests at once is very problematic, and scaling wide too fast can slow processes down and consume too many resources, which is very inefficient overall. Configuration generation can likewise take too long and be inefficient. With all this in mind, the blogger isn’t saying we should never consider tomorrow’s problems; he is more in line with building things up as we go.

“Automation Principles – YAGNI / Premature Optimizations” :

https://blog.networktocode.com/post/Principle-YAGNI/

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.

Semantic Versioning (There’s a standard?!?!)

Being the person that I am, I made my own rules without being influenced by anyone else. For me, I simply thought:

Well… It’s my code, right? That means I can name things however I want, right? So if I said this is version 0.9, then it’s version 0.9! Or if this is version 2.0, then it’s 2.0!

In response, my friend informed me of a few soft rules, since he was more familiar with versioning through an internship opportunity than I was, and he shed some light on a few details, like how a version is not just a made-up number that you assign to your project or code. Hearing this, I was honestly pretty confused, but I looked into it and discovered Semantic Versioning!

And wow, did that blow me away. Looking back at the way I approached things, I was no different from a headless chicken running around! Learning about Semantic Versioning through this video, I found it both straightforward and incredibly useful. No longer will my versions be numbered willy-nilly, nor will I forget my own versions and what happened in each update! It’ll finally make sense!

To break down Semantic Versioning, consider versions as three parts!

The Major version, so think 1.0.0
The Minor version, such as 0.1.0
And finally, patches, like so 0.0.1

Major versions are, well, major! These are versions that leap forward and contain large changes! Releasing a major version usually implies that the code has changed in such a way that it will no longer work with older versions. These might be large architectural changes or other large-scale changes that break the existing structures or APIs.

Minor versions are NOT as massive (duh), but they come with plenty more than, say, patches! Minor versions could add features or update existing features, and most of the time, nothing should break. Functionality that existed before should still work, and further development of new features shouldn’t impact or break anything either (ideally).

Patches are like bandages! These are where the bug fixes are! If something is broken in an unintended way, that’s when patches come in, and they should be used to iron out problems that were not accounted for. It could be something minor, like the output of some code being a 2 instead of a 1, or something bigger, like a whole portion of the code crashing when run, but patches are for fixes!
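
To see how the three parts behave mechanically, here is a small sketch of my own (not from the video), assuming Java 16+ for records: precedence is decided by the major number first, then minor, then patch.

    // Compare semantic versions: major first, then minor, then patch.
    public record SemVer(int major, int minor, int patch) implements Comparable<SemVer> {

        public static SemVer parse(String version) {
            String[] parts = version.split("\\.");
            return new SemVer(Integer.parseInt(parts[0]),
                              Integer.parseInt(parts[1]),
                              Integer.parseInt(parts[2]));
        }

        @Override
        public int compareTo(SemVer other) {
            if (major != other.major) return Integer.compare(major, other.major);
            if (minor != other.minor) return Integer.compare(minor, other.minor);
            return Integer.compare(patch, other.patch);
        }
    }

So SemVer.parse("1.2.0") compares greater than SemVer.parse("1.0.1"), because the minor number is checked before the patch number ever matters.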

As an example, I’ll run through a brief bit of versioning on a made-up project! Let’s say… I’m making a program to print out a leaderboard. It connects to an online library to gather scores, and then displays the leaderboard to whoever connects.

After long and grueling work, I did it! I finally completed it and so, I release…
Version 1.0.0
It’s done and completed, but then… It turns out, my leaderboard only prints out the WORST scores! Dang it, I flipped the list around by mistake! So off I go, fixing this bug. After everything is working again, my next version should be 1.0.1.
But then… I think to myself, wouldn’t it be cool if I added region support too? Then we could see where this score is from, and potentially even group regions by high scores!
So, after adding this new feature, my next version will be 1.1.0!
Pretty straightforward, right? It’s really clear what is going on too!
Let’s say another bug is introduced; bam, 1.1.1!
Then more features: 1.2.0.
And finally another bug is squashed! 1.2.1
And… another bug! 1.2.2
But… then wait, there’s another bug. 1.2.3
Things feel good and everything works, but then I learn about a whole new library that does what I want and gives me more room for growth. So I rework the whole program with this new library! The old functionality still works, but since I’m switching to a whole new system, this version will no longer work with the old one, and now this is when I jump a major version.
We’re now on version 2.0.0!

I hope that example made sense! There is more to semantic versioning as well, but that is the quick breakdown. Things like alpha/beta pre-releases and the ^ and ~ range markers add some other intricacies, but for now… let’s not think about them! Alphas/betas are optional for testing purposes, and to be honest, ^ and ~ are a bit out of my grasp. I don’t fully understand them yet! I’ll look into them more, and maybe I’ll have a future update post detailing what I’ve discovered!

From the blog CS@Worcester – Bored Coding by iisbor and used with permission of the author. All other rights reserved by the author.

UML Diagrams Are Amazing!

These past few weeks, I’ve been getting myself refamiliarized with UML diagrams. These diagrams have made frequent appearances in my CS career. From my understanding, they are a great way of analyzing one’s code from the top down. At first, I thought they were just another hassle: some UML diagrams can get rather difficult to understand, and with all the terminology and the different ways to draw these charts, it can get pretty hectic to figure out how the code works. I learned to take in the information one piece at a time. In the blog “Types of UML Diagrams” by the Lucid Content Team, it is explained that UML diagrams are essential to any formal code training, but they take some time to build and can become out of date fairly quickly in an Agile environment. Still, they are very useful for quick visual documentation: employees can give stakeholders a quick overview of the system, so developers don’t waste time in meetings.

UML stands for Unified Modeling Language, a way to visually represent the architecture, design, and implementation of complex software systems. It is supposed to keep track of the relationships and hierarchies within a software system; it’s hard enough to keep track of thousands of lines of code, so the UML diagram tracks all of these components of the software for us. UML diagrams can be used with basically any programming language, so all software developers should be able to understand them. They keep things productive and focused, and they are very helpful to engineering teams: they help bring new team members or developers up to speed, aid source code navigation, support planning new features before programming them, and make communication with a non-technical audience easier, which means that most people can understand the process regardless of programming experience.

There are many types of UML diagrams. The first family is structural UML diagrams, which show how the system is structured, with classes, objects, packages, and the relationships between them. The component diagram is a more specialized version of the class diagram; it breaks a complex system down into smaller components and visualizes the relationships between them. Deployment diagrams show how software is deployed on the hardware components of a system. Composite structure diagrams are essentially blueprints for the internal structure of a classifier. Object diagrams show examples of data structures at a specific point in time. And package diagrams show dependencies between different packages. Obviously, this doesn’t even cover half of the UML diagram spectrum, since we haven’t even touched behavioral UML diagrams, which are used to visualize how the system behaves and interacts with itself and other systems.

I’ve come to realize that UML diagrams can be very useful. It’s important to read code from the source, but that can be rather time-consuming; a UML diagram is much easier to take in and can explain how the software works in just minutes. In my future projects, I want to utilize UML diagrams so that I can better explain my own work to others. I feel it would have been much easier to explain my past projects to people if I had had one. The blog was quite interesting because it explained the many types of UML diagrams that exist and their practical uses.

Link to “Types Of UML Diagrams” by Lucid Content Team: https://www.lucidchart.com/blog/types-of-UML-diagrams

From the blog CS@Worcester – FindKelvin by Kelvin Nina and used with permission of the author. All other rights reserved by the author.