Category Archives: Week 4

Dig Deeper Pattern

This pattern focuses on being able to decompose problems entirely and understand each part thoroughly. In the workforce, many people have only enough approximate knowledge to get through their days, but when real problems arise, that surface-level knowledge fails them. It is extremely important to be able to take a problem apart and truly understand every piece of it. To decompose these large problems, you have to study and understand every part that goes into them, while being careful not to become too narrowly focused on one idea.

I connected with this pattern a lot because, many times when working on a project, I will run into an issue that I can give a "hacky" fix to by reading about it online, but I never actually acquire knowledge of that topic. I often make the excuse that school imposes timelines and deadlines that make it unreasonable to fully delve into a project. However, I need to start taking these problems apart piece by piece in order to truly learn anything I come into contact with. If I start with small sections of material, eventually there will be overlaps in knowledge that build on top of each other until the projects I am faced with appear far easier. It is often overwhelming to start researching the entire idea behind a project, but taking small slices of it makes it far more manageable.

I also enjoyed how this pattern discusses that there is often a more knowledgeable person available to fill in the gaps you may have. In the workforce this can be a co-worker or a higher-up, but in school it is usually the teacher. Building the knowledge for oneself is far better in the long run than repeatedly going to the person who has a better understanding. Everyone who has reached that level of knowledge took the time to break down and understand the individual components of every problem they ran into.

From the blog CS@Worcester – Journey Through Technology by krothermich and used with permission of the author. All other rights reserved by the author.

Kindred Spirits: Programming inspired by Anne of Green Gables

When reading the Kindred Spirits apprenticeship pattern, I was immediately reminded of my first real programming class, CS140, where I met some of the friendly faces I would spend the next two years becoming a programmer with. A few of those faces were part of my group in that class's lab, and together we challenged each other on the subject and helped fill in each other's blind spots. While this is hardly unique, I'm sure, it exemplifies what the pattern is all about: finding peers you can learn and grow with. Now, with those two years nearly behind us and the end (of our education) in sight, those same friends have been amazing resources.

As new learners, we tend to go in all different directions, not favoring any set method because we have yet to find the path of least resistance or the best practices. Consequently, we each create puzzle pieces from our bits of knowledge that we can combine into a clear picture of a language, framework, et cetera. I have also found that what was often most useful was the ability to politely disagree – read: bicker like an old couple – with a peer and push back and forth on each other to really challenge one another's understanding of something. It was through this back and forth that many of the solutions we were so desperately searching for came about, which is something the pattern even addresses, stating that it is imperative to avoid groupthink and to challenge each other.

I distinctly remember having an erratic mass of code segments and illustrations of a particular data structure – exactly which one I cannot remember – all over the board in the Science and Technology Center study rooms. Data Structures, our major's trial by fire, has crushed fledgling programmers of far greater skill than me. I can say without a doubt that, had it not been for those brainstorming sessions, I would not have passed that class or completed the projects as thoroughly as I eventually did. As I look forward to internships and jobs after graduation, I hope to keep those around me who I have learned so much from, and to find new kindred spirits to take on this next chapter of my programming career.

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Record what you learn

“You should not also underestimate the power of writing itself…You can lose your larger sense of purpose. But writing lets you step back and think through a problem. Even the angriest rant forces the writer to achieve a degree of thoughtfulness.”

Atul Gawande, Better

The purpose of Record What You Learn is to help you retain what you learn for a very long time. Using this pattern is not just about writing notes down on paper and then forgetting about them. It is about writing down the information you need to learn and returning to it continuously to keep that information stored in your brain. Keeping a record of your lessons in a blog or a personal wiki is not a bad idea at all. You decide whom you are going to share it with, or whether to share it at all. As long as you write down your notes and come back to read them, or even improve them, your brain will stay in good shape for remembering things.

This pattern made me realize that taking notes and writing down the most important parts of a lesson is not the end of the learning process. Most of the time when I have to learn something new, I write pages and pages that never end, read them, and then write a summary without looking at the notes. The problem with this is that after I am done with that lesson, I never really go back to read those notes, or even the summary. I wonder if I even still have those notes!

I found this pattern very useful and interesting for my career. Starting today, I will keep track of my writings, add more detail to them, and date them, so that when I go back I will find organized writing that makes me want to return to it over and over again. There are few better habits than reading constantly and memorizing useful information. Another important part of this pattern is that when you go back to read, you realize how much things have evolved for a specific topic, and you ask yourself whether you should update the original or not. In my opinion, just as everything nowadays gets updated, writings need updates as well. In conclusion, keep a record of your writings the same way you keep a record of bank transactions…

From the blog CS@Worcester – Gloris's Blog by Gloris Pina and used with permission of the author. All other rights reserved by the author.

Singleton…

This week was all about design patterns and ways to implement them. We have only just begun, and to start we talked about Singleton. It is a design pattern that ensures a single instance of a class is created and used from a well-known access point. I had some previous interaction with this particular design pattern at my work: when we were using C++, it was one of the ways we got around some of the problems we had. Since moving to C# it has gone away, but I still knew about Singleton before this week's class at school. I was a little curious and wanted to read more about it, so I looked up a blog post by Ted Neward called, surprise, surprise: "Singleton". The link to it is here.

In this blog Ted talks about Singleton quite extensively, in my opinion, covering its context, the problem it solves, and the consequences of implementing it. I definitely liked the information provided, and it helped me expand my knowledge of the design pattern, which is a good extension of what I already knew from both work and school. One part of the blog really helped me understand it better: "Reduced name space. "Singleton" is just "global" hiding behind another name. One of the explicit goals (in 1995) was to be able to have the necessary scope-wide state, but without accidentally clashing over names in that global namespace. Languages which support explicit namespacing (Java, C#, C++, Swift, yeah pretty much all of them) mean that we can have this benefit even without doing anything more than moving the global variable into one of those namespacing mechanisms." This describes Singleton as a global, and I like that description because it really drove home what Singleton is and how to use it, I think….
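To make the idea concrete, here is a minimal sketch of the classic pattern in Java. This is my own illustration rather than code from Neward's post, and the class name AppConfig is made up; the point is just the private constructor plus the single well-known access point.

    public final class AppConfig {
        // The one shared instance, created eagerly when the class loads.
        private static final AppConfig INSTANCE = new AppConfig();

        // Private constructor: nobody else can create an AppConfig.
        private AppConfig() { }

        // The well-known access point that everyone calls.
        public static AppConfig getInstance() {
            return INSTANCE;
        }
    }

Because the only instance lives in a static field behind getInstance(), every caller shares the same object, which is exactly the "global hiding behind another name" that Neward describes.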

Overall, I know I will be learning about many other design patterns and implementations, but for now this is good enough. Who knows, maybe others will turn out to be better solutions than Singleton, but I think it is the simplest one to learn. Neward mentions a couple of times the debate of Singleton versus statics, and that is interesting in its own right; it is something I will research a little more when time allows. Until then, I remember one thing my boss said about singletons back in the day: "Singletons were used everywhere, and somehow it worked, we didn't know why, but it did."

From the blog #CS@Worcester – Pawel’s CS Experience by Pawel Stypulkowski and used with permission of the author. All other rights reserved by the author.

The Singleton Design Pattern

Recently in CS-343, I have been introduced to the concept of design patterns. These are essentially code-writing techniques that help make code easier to describe, modify, and maintain. There are a wide variety of design patterns, each with different benefits and drawbacks. My last class on Thursday ended as we began to cover the Singleton pattern, so I decided to look into Singleton in advance of our next class's activities on it. My research led me to Andrew Powell-Morse's blog post "Creational Design Patterns: Singleton," which can be found here:

https://airbrake.io/blog/design-patterns/creational-design-patterns-singleton

This post, as you may expect, is focused on explaining the Singleton pattern to the reader. Powell-Morse accomplishes this by using real-world analogies to describe the concept of Singleton and a programming example to show how the pattern can be implemented in code. I chose to write about this blog not only because it explains Singleton well, but also because I found the programming example interesting. The purpose of the Singleton pattern is to ensure only a single instance of certain classes can exist at a time, which Powell-Morse clearly states right at the start of the blog. This is a simple enough concept to grasp, but Powell-Morse elaborates further by explaining that Singleton should be thought of in terms of uniqueness rather than singularity. He uses real-world examples of uniqueness, those being individuality in people and a unique deck of cards used for poker games, to describe situations in which Singleton can be useful. These examples, especially the deck of cards, have helped me understand that Singleton is useful in programs that only require a single instance of a class, and I could definitely see myself applying this concept in future projects to help reduce memory usage from unnecessary objects.

Since I found the deck of cards analogy especially helpful, I was pleased to discover that it was the focus of Powell-Morse's programming example. However, the example's complexity made it somewhat difficult to understand at first. Instead of simply demonstrating how Singleton is coded, Powell-Morse applies the concept to a program using multiple threads that separately remove cards from the same deck instance. I have not written many programs myself that use multi-threading, and this lack of experience made the example confusing initially. The example is also written in C#, a language I am not nearly as experienced in as I am with Java. Despite my initial confusion, I eventually understood the example and grew to appreciate its complexity. The use of multi-threading in the example helped demonstrate a major drawback of Singleton and how to work around it in C#. The example not only taught me how to implement Singleton in my future coding projects using static variables, but it also showed me how to work around Singleton's issues with multi-threading. This blog post also taught me more about a language that isn't Java, which is always welcome.
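Powell-Morse's code is in C#, but a rough Java equivalent of one common workaround for the multi-threading drawback might look like the sketch below. This is my own assumption of how it could be done, not the blog's actual example, and the Deck class here is only a stand-in.

    public final class Deck {
        private static Deck instance;   // the single shared deck, created lazily

        private Deck() {
            // build and shuffle the 52 cards here
        }

        // synchronized keeps two threads from each constructing their own deck
        public static synchronized Deck getInstance() {
            if (instance == null) {
                instance = new Deck();
            }
            return instance;
        }
    }

Without the synchronized keyword, two threads calling getInstance() at the same time could both see instance as null and build two decks, which is the kind of issue a multi-threaded example exposes.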

From the blog CS@Worcester – Computer Science with Kyle Q by kylequad and used with permission of the author. All other rights reserved by the author.

Putting the ‘O’ in SOLID


We covered the Open/Closed Principle in a recent lab, and it prompted a desire to cover some of the SOLID principles. I have determined, however, that the length of this blog is short enough that I should dedicate it to a single principle at a time, beginning with the aforementioned Open/Closed Principle.

This particular object-oriented principle was outlined by Bertrand Meyer and states:


“…entities should be open for extension, but closed for modification.”


This, of course, is a complex way of expressing a simple guideline for software development: new functionality should be able to be added without having to radically change existing code. The operative word, extension, is illustrative. If your code were a home and you wanted to add more square footage (read: functionality), you shouldn't need to knock all the walls, or the whole thing, down just to add a bathroom.

In the blog I chose, the author summarizes a talk he gave on this very subject. In it, he provides an example of a program he is developing for a company that calculates the total area of a set of rectangles. As the customer requests more and more shapes be added to the calculator, the original code changes drastically and gets longer and longer. That kind of modification is exactly what the principle opposes. In contrast, creating a Shape parent with many children for specific shapes – each with its own area function – and the ability to add more eliminates constant modification of the main class and allows for continual extension in the form of new shapes.
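As a rough sketch of what that might look like in Java (my own version of the idea, not the blog author's actual code), the calculator below is closed for modification while new shapes are added purely by extension:

    import java.util.List;

    interface Shape {
        double area();
    }

    class Rectangle implements Shape {
        private final double width, height;
        Rectangle(double width, double height) { this.width = width; this.height = height; }
        public double area() { return width * height; }
    }

    class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    class AreaCalculator {
        // Closed for modification: this method never changes when new shapes are added.
        static double totalArea(List<Shape> shapes) {
            double total = 0;
            for (Shape s : shapes) {
                total += s.area();
            }
            return total;
        }
    }

Adding a Triangle later means writing one new class with its own area() method, which is the extension, while AreaCalculator and every existing shape stay untouched, which is the closure.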

To say simply that code should be modular is unhelpful, as that is broad and is the general definition of so many other good coding practices. Our class and homework provide another perfect example: why should we have to constantly override and rewrite portions of a superclass's methods in each of its child classes to achieve proper functionality? Instead, by making Quack and Fly behavior interfaces we have established a broad mold that many new behaviors can be built from, and access to all of them is then granted through the relationship to the single respective behavior interface.

Therefore, the ability of code to be extended with new functionality through a single reference – in this case the behaviors or the shapes – instead of needing to rewrite or overwrite code is what is meant by being open for extension, while keeping the existing code from needing constant revision is what is meant by being closed for modification. Like the house example earlier, you should be building out, not constantly renovating what already exists.


Sources:

A simple example of the Open/Closed Principle

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

The KISS and RoP Trade-off

I’ve discussed trade-offs in software before. The more you delve into the world of design and engineering, the more evident it becomes that trade-offs are an underlying principle to rule all principles. There is no perfect software; there is no perfect solution. My aim to strive for the best software, however, led to me choose this topic to discuss.

I intended to keep it simple and focus on KISS (Keep it Simple, Stupid), which claims that a simple, straightforward solution is the better solution. But after listening to a podcast about design principles by SoftwareArchitekTOUR*, I was forced to make a trade-off between a simple blog post and a more powerful, generic one. Specifically, they mentioned the RoP (Rule of Power) principle or GP (generalization principle), which states “that a more generic solution is better than a specific one”.

“You will never produce a perfect design… but you can’t typically have both [KISS and RoP]. Either it is simple and specific, or it’s very generic and flexible and therefore no longer simple but rather has a certain degree of complexity. You can typically only strive for a Pareto-optimal solution.”

Christian Rehn (translated from German), SoftwareArchitekTOUR Episode 60, 14:30 – 15:45

Rehn says it’s a balancing act, and neither principle is right or wrong. The concept of a Pareto-optimal solution is rather simple: find a solution that is better in at least one aspect and at least as good in every other aspect. As Rehn says, you have to ask yourself what works best in your situation. What is the best compromise?

So what does this mean when choosing between KISS and RoP? Like any new concept or principle, both should be practiced in isolation, with small projects. It is only with this basic experience that you can learn how to make these decisions in more complicated systems.

In learning design principles, I have refactored code in order to make it adhere to a design pattern or principle. When new situations arise, the old solution may no longer hold water. But each time, the code was kept as simple as possible given the requirements. The Pareto-optimal solution is the simplest solution that provides the required functionality. This leads me to conclude that KISS trumps RoP, especially considering other principles such as YAGNI, which strives to prevent the code smell of unneeded complexity. On the other hand, the simplest solution, in which every class does a single, specific thing, would itself become extremely verbose, violating DRY in particular.
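As a small, made-up Java illustration of that tension: the first method is the KISS version, written for the one case that exists today, while the second is the more generic, RoP-style version, which is more flexible but clearly no longer as simple.

    import java.util.List;
    import java.util.function.ToDoubleFunction;

    class Averages {
        // Simple and specific (KISS): does exactly one job, readable at a glance.
        static double averageOfInts(int[] values) {
            double sum = 0;
            for (int v : values) {
                sum += v;
            }
            return values.length == 0 ? 0.0 : sum / values.length;
        }

        // Generic and flexible (RoP): works for any type you can map to a number,
        // at the cost of generics, a functional parameter, and more reading effort.
        static <T> double average(List<T> items, ToDoubleFunction<T> toNumber) {
            return items.stream().mapToDouble(toNumber).average().orElse(0.0);
        }
    }

If the integer case is all a project ever needs, the second version is exactly the unneeded complexity YAGNI warns about; if several types genuinely need averaging, it removes duplication instead.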

Determining how to make these trade-offs is an art and comes with experience. In the meantime, I plan to keep my code as simple as possible and add complexity and generalizations as needed.

* This podcast is only available in German, but the English show notes explain the topics of the podcast in detail.

From the blog CS@Worcester – Inquiries and Queries by ausausdauer and used with permission of the author. All other rights reserved by the author.

Static VS Dynamic Testing


In software testing, the two most popular and important methods are static and dynamic testing. While both names are fairly self-explanatory, it's good to go over the distinctions and, more importantly, the actual implementations of both.

Static testing involves reviewing project documents, including source code, to weed out actual or potential security flaws and general mistakes. This takes the form of informal and technical reviews, inspections, walkthroughs, and more. The process can involve an inspection tool or be performed manually, searching for potential run-time problems without actually running the code. Dynamic testing, conversely, involves actually executing the code and examining its functionality, resource usage, and performance. We have been using dynamic testing, specifically unit testing, in our class so far. Dynamic testing is used mainly to verify that the program runs in accordance with the specifications outlined for it, which is specifically called system testing.
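For example, a unit test like the following made-up JUnit 5 sketch is dynamic testing: the code is actually executed and its observed behavior is compared with the specification. The Calculator class here is hypothetical, not something from the coursework.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class CalculatorTest {
        @Test
        void addReturnsSumOfTwoNumbers() {
            Calculator calc = new Calculator();   // hypothetical class under test
            assertEquals(7, calc.add(3, 4));      // the code runs and its result is checked
        }
    }

Reading Calculator's source by eye, or running a style checker or static analyzer over it without executing it, would be the static counterpart.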

Of course, both have their own advantages and disadvantages. With static testing you have the potential to find, early in the process, bugs that would have made later development more troublesome, which can save a lot of time and frustration in the future. However, some flaws can only be revealed by a running program, and only in the code that is actually executed. It seems to me that these approaches should not be exclusive and that, in fact, they have a logical order.

You would begin with static testing, like proofreading a paper: searching for misspelled words (poor variable and method names), run-on sentences (needless complexity or repetition), and reorganizing to best get your point across (a logical order of declarations and methods, documentation). As with writing a paper, it is best to do this before run-time (the read-through), so that you don't have to constantly stop to bandage small errors. In the same way, it seems essential to 'proofread' your code before execution to make sure it is free of identifiable errors before you move on to seeking out run-time errors. You wouldn't want to have to fix both at the same time, just as with proofreading.

In sum, these two could be grouped as proactive and reactive – static and dynamic, respectively – which best explains their use. As mentioned, it also makes sense to apply them in a specific order: ensure the code is as correct as possible upon inspection, then run it to see where flaws undetectable in a static environment arise. Together they help ensure quality software that is optimized and secure.

Static Testing vs Dynamic Testing: What’s the Difference?
Static Testing vs. Dynamic Testing

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Always Use Chap Stick

While the title of this blog post doesn't seem like it has anything to do with software development, trust me, it does. I was reading a blog post by Arvind Singh Baghel called "Software Design Principles DRY and KISS", which talks about important design principles, how they are violated, and good tips on how to follow them. DRY stands for "Don't Repeat Yourself", which basically means avoiding using the same code, or iterations of the same code, over and over or in different places, since it wastes time for the people reading it and for the software itself, and creates unnecessary complexity. KISS stands for "Keep It Simple, Stupid", which in terms of software development means keeping methods small and avoiding complexity.

I chose this blog post because it lays out the principles that keep code designs simple and easy to change later, or to read if another person picks them up, which is something I want to get better at. Looking back at a lot of my code, I rarely kept it simple or avoided repeating myself for the sake of saving time, but that would end up wasting more time later when I needed to go back and make changes or trace my code. My code would have methods that used the exact same lines of code with minor changes or no changes at all, which made following the code that much harder, since I had to look at each implementation of the same code and see what was slightly different. I would also try to fit as many different things into a single method as possible, which created extremely long methods. When those long methods of repeated code failed test cases, it was much more difficult to figure out where the real issue was, and it took more time than it would have if I had split up the methods and not reused code throughout the program.

Simplicity seems to be a major key in any good design, regardless of what you are designing, and the rules of DRY and KISS would have saved me a lot of time on projects and coding assignments had I applied them instead of trying to take the easy way out. I have already started applying them to my current projects, and it has made a huge difference in both my experience with coding and the time and energy it takes to complete a project.
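As a small, invented Java example of what applying DRY looks like in practice: instead of copy-pasting the same validation into every method, the repeated lines are pulled into one helper that each method calls.

    class UserService {
        // Without DRY, this same null/empty check would be copy-pasted into
        // registerUser, renameUser, and every other method that takes a name.
        private static void requireNonBlank(String value) {
            if (value == null || value.trim().isEmpty()) {
                throw new IllegalArgumentException("name is required");
            }
        }

        // With DRY, each method just reuses the single helper and stays short (KISS).
        void registerUser(String name) {
            requireNonBlank(name);
            // ... registration logic
        }

        void renameUser(String name) {
            requireNonBlank(name);
            // ... rename logic
        }
    }

If the validation rule ever changes, it now changes in exactly one place instead of in every method that repeated it.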

Link to blog post mentioned: https://programingthoughts.wordpress.com/2018/04/15/software-design-principles-dry-and-kiss/

From the blog CS@Worcester – Tyler Quist’s CS Blog by Tyler Quist and used with permission of the author. All other rights reserved by the author.

What’s that Smell

When you think of the word smell, your first thought is probably not software design. Yet design smells are a vital part of designing clean, readable, and reusable code. In total there are seven main design smells: rigidity, fragility, immobility, viscosity, needless repetition, needless complexity, and opacity. They are called smells because when one of them "smells strongly" in a codebase, there is probably poor code underneath. According to Wikipedia, the term design smell originally derives from code smell, which appeared in Martin Fowler's book Refactoring: Improving the Design of Existing Code.

(https://en.wikipedia.org/wiki/Design_smell)

In that book, a code smell – a high or bad smell – indicates a deeper problem in the code rather than something merely on the surface.

The first design smell is rigidity. When code has high rigidity, it often means a lot of behavior is hard-coded in. This makes the code rigid because small changes can have a much larger impact throughout the system. To avoid a high rigidity smell, code should be dynamic and able to fit the system it is in. If there is too much hard-coding, it will be very tough to make changes later on.
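A tiny, made-up Java example of the difference: in the rigid version the report server's address is hard-coded, so every environment change means editing and recompiling the class, while the more flexible version takes the address from outside.

    // Rigid: the address is baked into the code, so changing it ripples through rebuilds.
    class RigidReportSender {
        void send(String report) {
            String serverUrl = "http://10.0.0.5:8080/reports";   // hard-coded value
            System.out.println("Sending report to " + serverUrl);
        }
    }

    // Less rigid: the address is supplied from configuration, so this class never changes.
    class ConfigurableReportSender {
        private final String serverUrl;

        ConfigurableReportSender(String serverUrl) {
            this.serverUrl = serverUrl;
        }

        void send(String report) {
            System.out.println("Sending report to " + serverUrl);
        }
    }

The class names and the URL here are purely illustrative; the point is only where the value lives.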

The next design smell is fragility, which is similar to rigidity with a few exceptions. Fragility is the tendency of code to break when a small change is made. The main difference between the two is that with fragility, the areas that suffer are often unrelated to the changes made, whereas with rigidity the changes directly affect related code.

The third design smell is immobility. For a lot of code to be useful, it needs to be able to work with other systems as well. With immobility, the code or program could be beneficial to other areas, but there is too much effort or risk involved to successfully integrate it.

The fourth design smell is viscosity, which means the code has become "thick" and hard to work with. When you are designing a program, you want to modify it in ways that preserve the design, but "hacks" can be used to quickly fix an issue. The problem is that the more hacks there are in the program, the less the program sticks to its design. This causes a lot of extra code, and the program becomes highly viscous. It is important to know the impact of these small hacks and how to limit them.

The next design smell is needless repetition, which is pretty straightforward. In a program, you do not want to repeat the same code over and over. Instead, find ways to integrate and abstract the code for reusability.

The next design smell is needless complexity, again pretty straightforward. This happens when a program includes sections that are not yet used but "could be". That extra code clogs up space and provides no benefit.

The last design smell is opacity, which simply means becoming less clear. This can happen when there are many changes to a program but no effort to keep the code clean and organized. Without proper care, the code will eventually become foreign even to everyone who worked on it before.

Keep all these design smells in the back of your mind on your next project!

From the blog CS@Worcester – Journey Through Technology by krothermich and used with permission of the author. All other rights reserved by the author.