Category Archives: Week 4

The Singleton Design Pattern

Recently in CS-343, I have been introduced to the concept of design patterns. These are essentially code-writing techniques that help make code easier to describe, modify, and maintain. There are a wide variety of design patterns, each with different benefits and drawbacks. My last class on Thursday ended as we began to cover the Singleton Pattern, so I decided to look into Singleton ahead of next class's activities on it. My research led me to Andrew Powell-Morse's blog post "Creational Design Patterns: Singleton," which can be found here:

https://airbrake.io/blog/design-patterns/creational-design-patterns-singleton

This post, as you may expect, is focused on explaining the Singleton pattern to the reader. Powell-Morse accomplishes this by using real-world analogies to describe the concept of Singleton and a programming example to show how the pattern can be implemented in code. I chose to write about this blog not only because it explains Singleton well, but also because I found the programming example interesting. The purpose of the Singleton pattern is to ensure only a single instance of certain classes can exist at a time, which Powell-Morse clearly states right at the start of the blog. This is a simple enough concept to grasp, but Powell-Morse elaborates further by explaining that Singleton should be thought of in terms of uniqueness rather than singularity. He uses real-world examples of uniqueness, those being individuality in people and a unique deck of cards used for poker games, to describe situations in which Singleton can be useful. These examples, especially the deck of cards, have helped me understand that Singleton is useful in programs that only require a single instance of a class, and I could definitely see myself applying this concept in future projects to help reduce memory usage from unnecessary objects.

Since I found the deck of cards analogy especially helpful, I was pleased to discover that it was the focus of Powell-Morse's programming example. However, the example's complexity made it somewhat difficult to understand at first. Instead of simply demonstrating how Singleton is coded, Powell-Morse applies the concept to a program using multiple threads that separately remove cards from the same deck instance. I have not written many programs myself that use multi-threading, and this lack of experience made the example confusing initially. The example is also written in C#, which is a language I am not nearly as experienced in as I am with Java.

Despite my initial confusion, I eventually understood the example and grew to appreciate its complexity. The use of multi-threading in the example helped demonstrate a major drawback of Singleton and how to work around it in C#. The example not only taught me how to implement Singleton in my future coding projects using static variables, but it also showed me how to work around Singleton's issues with multi-threading. This blog post also taught me more about a language that isn't Java, which is always welcome.
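To cement the idea for myself, here is a minimal thread-safe Singleton sketch in Java rather than the blog's C#. The Deck class and its contents are my own illustration, not Powell-Morse's actual code:

import java.util.Collections;
import java.util.Stack;

// A simplified deck used as the single shared resource.
public class Deck {
    // The single instance, created eagerly when the class loads.
    // Eager initialization is inherently thread-safe in Java.
    private static final Deck INSTANCE = new Deck();

    private final Stack<String> cards = new Stack<>();

    // A private constructor prevents any other class from creating a Deck.
    private Deck() {
        for (String suit : new String[] {"Hearts", "Spades", "Clubs", "Diamonds"}) {
            for (int rank = 1; rank <= 13; rank++) {
                cards.push(rank + " of " + suit);
            }
        }
        Collections.shuffle(cards);
    }

    // Every caller, on every thread, receives the same unique instance.
    public static Deck getInstance() {
        return INSTANCE;
    }

    // synchronized so two threads cannot deal the same card at once.
    public synchronized String dealCard() {
        return cards.isEmpty() ? null : cards.pop();
    }
}

Because the instance is created eagerly, getInstance() needs no locking; only dealCard() is synchronized. Lazy initialization is also common, but then access to the instance itself must be made thread-safe, which is essentially the multi-threading pitfall the blog's example demonstrates.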

From the blog CS@Worcester – Computer Science with Kyle Q by kylequad and used with permission of the author. All other rights reserved by the author.

Putting the ‘O’ in SOLID


We covered the Open/Closed principle in a recent lab, and it prompted a desire to cover some of the SOLID principles. I have determined, however, that the length of this blog is short enough that I should dedicate it to a single principle at a time, beginning with the aforementioned Open/Closed principle.

This particular object-oriented principle was outlined by Bertrand Meyer and states:

“…entities should be open for extension, but closed for modification.”


This, of course, is a complex way of expressing a simple guideline for software development: new functionality should be able to be added without having to radically change existing code. The operative word, extension, is illustrative. If your code were a home, and you wanted to add more square footage (read: functionality), you shouldn't need to knock all the walls, or the whole thing, down to add a bathroom.

In the blog chosen, the author summarizes a talk he gave about this very subject. In it, he provides an example of a program he is developing for a company that calculates the total area of a set of rectangles. As the customer requests more and more shapes be added to the calculator, the original code changes drastically and gets longer and longer. Such constant modification runs against the principle. Instead, creating a Shape class with many children for different specific shapes – each with its own area function – and the ability to add more eliminates constant modification of the main class, and allows for constant extensibility in the form of new shapes.
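A minimal Java sketch of that refactoring, using hypothetical names rather than the author's actual code:

import java.util.List;

// Open for extension: new shapes are added as new subclasses.
abstract class Shape {
    abstract double area();
}

class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    @Override double area() { return width * height; }
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

// Closed for modification: the calculator never changes when shapes are added.
class AreaCalculator {
    static double totalArea(List<Shape> shapes) {
        double total = 0;
        for (Shape shape : shapes) {
            total += shape.area();
        }
        return total;
    }
}

Adding a Triangle tomorrow means writing one new subclass; AreaCalculator, and everything that uses it, stays untouched.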

To say simply that code should be modular is unhelpful, as that is broad and the general definition of so many other good coding practices. Our class and homework provide another perfect example. Why should we have to constantly override and rewrite portions of a superclass's methods in each of its child classes to achieve proper functionality? Instead, by making a Quack/Fly Behavior interface we have established a broad mold that many new behaviors can be built from, all of which are then accessible through the single respective behavior interface.
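A small sketch of that interface approach, using names along the lines of our class exercise (the exact classes here are my own assumption):

// The behavior is declared once as an interface...
interface FlyBehavior {
    void fly();
}

// ...and each new behavior is an extension, not a modification.
class FlyWithWings implements FlyBehavior {
    public void fly() { System.out.println("Flapping wings!"); }
}

class FlyNoWay implements FlyBehavior {
    public void fly() { System.out.println("I can't fly."); }
}

class Duck {
    // The single reference through which every behavior is accessed.
    private final FlyBehavior flyBehavior;

    Duck(FlyBehavior flyBehavior) { this.flyBehavior = flyBehavior; }

    void performFly() { flyBehavior.fly(); }
}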

Therefore, the ability of code to be extended with new functionality through a single reference – in this case the behaviors, or shapes – instead of needing to rewrite or overwrite code is what is meant by being open for extension; keeping the existing code from needing constant revision is being closed for modification. Like the house example earlier, you should constantly be building out, not renovating what already exists.


Sources:

A simple example of the Open/Closed Principle

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

The KISS and RoP Trade-off

I've discussed trade-offs in software before. The more you delve into the world of design and engineering, the more evident it becomes that trade-offs are an underlying principle to rule all principles. There is no perfect software; there is no perfect solution. My aim to strive for the best software, however, led me to choose this topic to discuss.

I intended to keep it simple and focus on KISS (Keep It Simple, Stupid), which claims that a simple, straightforward solution is the better solution. But after listening to a podcast about design principles by SoftwareArchitekTOUR*, I was forced to make a trade-off between a simple blog post and a more powerful, generic one. Specifically, they mentioned the RoP (Rule of Power) principle, or GP (generalization principle), which states that "a more generic solution is better than a specific one".

“You will never produce a perfect design… but you can’t typically have both [KISS and RoP]. Either it is simple and specific, or it’s very generic and flexible and therefore no longer simple but rather has a certain degree of complexity. You can typically only strive for a Pareto-Optimal Solution.”

Christian Rehn (translated from German), SoftwareArchitekTOUR Episode 60, 14:30 – 15:45

Rehn says it's a balancing act, and neither principle is right or wrong. The concept of a Pareto-optimal solution is rather simple: find a solution that is better in at least one aspect, and at least as good in every other aspect. As Rehn says, you have to ask yourself what works best in your situation. What is the best compromise?

So what does this mean when choosing between KISS and RoP? Like any new concept or principle, both should be practiced in isolation, with small projects. It is only with this basic experience that you can learn how to make these decisions in more complicated systems.

In learning design principles, I have refactored code in order to make it adhere to a design pattern or principle. When new situations arise, the old solution may no longer hold water. But each time, the code was kept as simple as possible given the requirements. The Pareto-optimal solution is the simplest solution that provides the required functionality. This leads me to conclude that KISS trumps RoP, especially considering other principles such as YAGNI, which strives to prevent the code smell of unneeded complexity. But on the other hand, the simplest solution, in which every class does a single, specific thing, would itself become extremely verbose, violating DRY in particular.
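To make the trade-off concrete, here is a hypothetical contrast of my own (not from the podcast), in Java:

import java.util.List;
import java.util.function.BinaryOperator;
import java.util.function.Function;

public class TradeOff {
    // KISS: a simple, specific solution. Trivially readable, does one thing.
    static int sumOfSquares(List<Integer> values) {
        int total = 0;
        for (int v : values) total += v * v;
        return total;
    }

    // RoP/GP: a generic solution. It can express sumOfSquares and much more,
    // but the reader must first untangle the abstraction.
    static <T, R> R mapReduce(List<T> values, Function<T, R> map,
                              BinaryOperator<R> reduce, R identity) {
        R result = identity;
        for (T v : values) result = reduce.apply(result, map.apply(v));
        return result;
    }

    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3);
        System.out.println(sumOfSquares(nums));                            // 14
        System.out.println(mapReduce(nums, v -> v * v, Integer::sum, 0));  // 14
    }
}

The generic version is more powerful but no longer simple; which one is Pareto-optimal depends entirely on whether that flexibility is actually required.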

Determining how to make these trade-offs is an art and comes with experience. In the meantime, I plan to keep my code as simple as possible and add complexity and generalizations as needed.

* This podcast is only available in German, but the English show notes explain the topics of the podcast in detail.

From the blog CS@Worcester – Inquiries and Queries by ausausdauer and used with permission of the author. All other rights reserved by the author.

Static VS Dynamic Testing


In software testing, the two most popular and important methods are static and dynamic testing. While both names are fairly self-explanatory, it's good to go over the distinctions and, more importantly, the actual implementations of both.

Static testing involves review of project documents, including source code, to weed out any actual or potential security flaws and general mistakes. This takes the form of informal and technical reviews, inspections, walkthroughs, and more. This process can involve an inspection tool or be performed manually, searching for potential run-time problems, but without actually running the code. Dynamic testing, conversely, involves actually executing code and examining functionality, resource usage, and performance. We have been using dynamic testing, specifically unit testing, in our class so far. Dynamic testing is used mainly to verify that the program runs in accordance with the specifications outlined for it, which in its full form is called system testing.
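As a tiny illustration of the dynamic (unit) testing we have been doing, here is a JUnit 5 sketch; Calculator is a hypothetical class under test, not from any particular assignment:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Dynamic testing: the code under test actually executes,
// and its observable behavior is checked against the specification.
public class CalculatorTest {

    @Test
    public void addReturnsTheSumOfItsOperands() {
        Calculator calculator = new Calculator(); // hypothetical class under test
        assertEquals(5, calculator.add(2, 3));
    }
}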

Of course, both have their own advantages and disadvantages. In static testing you have the potential to find, early in the process, bugs that would have made later development more troublesome, which can save lots of time and frustration in the future. However, there may exist some flaw that only a running program can reveal, and only in the code that is actually executed. It would seem to me that these should not be exclusive; in fact, they have a logical order.

You would begin with static testing, like proofreading a paper: searching for misspelled words (poor variable/method names), run-on sentences (needless complexity or repetition), and reorganizing to best get your point across (logical order of declarations/methods, documentation). As with writing a paper, it is best to do this before run-time (the read-through), so that you don't have to constantly stop to bandage small errors. In the same way, it seems essential to 'proofread' your code before execution to make sure it is free from identifiable errors before you move on to seeking out run-time errors. You wouldn't want to have to fix both at the same time, just as you wouldn't when proofreading.

In sum, these two could be grouped as proactive (static) and reactive (dynamic), which best explains their use. As mentioned, it also makes sense to use them in a specific order: ensuring code is as correct as possible upon inspection, then running it to see where flaws undetectable in a static environment have arisen. Together they ensure quality software that is optimized and secure.

Static Testing vs Dynamic Testing: What’s the Difference?
Static Testing vs. Dynamic Testing

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Always Use Chap Stick

While the title of this blog post doesn't seem like it has anything to do with software development, trust me, it does. I was reading a blog post by Arvind Singh Baghel called "Software Design Principles DRY and KISS," which talks about important design principles, how they are violated, and good tips on how to achieve them. DRY stands for "Don't Repeat Yourself," which basically means avoiding using the same code, or iterations of the same code, over and over or in different places, since it wastes time for the people reading it and for the software itself, and creates unnecessary complexity. KISS stands for "Keep It Simple Stupid," which in terms of software development means keeping methods small and avoiding complexity.

I chose this blog post because it lays out the important principles that keep code designs simple and easy to change later, or to read by another person, which is something I feel I want to get better about. Looking back at a lot of my code, I rarely kept it simple or avoided repeating myself, for convenience and time saving, but that would end up wasting more time later when I needed to go back and make changes or trace my code. The code I wrote would have methods that used the exact same lines of code with minor changes or no changes at all, which made following the code that much harder, since I had to look at each implementation of the same code and see what was slightly different. I would also try to fit as many different things into a single method as possible, which created extremely long methods. When methods failed test cases, these long methods of repeated code made it much more difficult to figure out where the real issue was, and took more time than it would have if I had split up the methods and not reused code throughout the program.

Simplicity seems to be a major key in any good design, regardless of what you are designing, and these rules of DRY and KISS would have saved me a lot of time on projects and coding assignments had I applied them and not tried to take the easy way out. I have already worked on applying this to my current projects, and it has made a huge difference in both my experience with coding and the time and energy it takes to complete a project.
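To illustrate the kind of repetition I described, here is a hypothetical before-and-after sketch of my own (not from Baghel's post):

// Before: the same logic copy-pasted with minor changes. Tracing a bug
// means reading every copy to see what is slightly different.
double priceWithStateTax(double price)  { return price + price * 0.0625; }
double priceWithCityTax(double price)   { return price + price * 0.0300; }
double priceWithCountyTax(double price) { return price + price * 0.0150; }

// After (DRY + KISS): one small method; the variation becomes a parameter.
double priceWithTax(double price, double taxRate) {
    return price + price * taxRate;
}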

Link to blog post mentioned: https://programingthoughts.wordpress.com/2018/04/15/software-design-principles-dry-and-kiss/

From the blog CS@Worcester – Tyler Quist’s CS Blog by Tyler Quist and used with permission of the author. All other rights reserved by the author.

What’s that Smell

When you think of the word smell, your first thought is probably not computer design. Yet design smells are a vital part of designing clean, readable, and re-usable code. In total there are 7 main design smells: rigidity, fragility, immobility, viscosity, needless repetition, needless complexity, and opacity. The reason they are called smells is that when one of them is "strong," there is probably poor code underneath. According to Wikipedia, the term design smell originally came from code smell, which appeared in Martin Fowler's book Refactoring: Improving the Design of Existing Code.

(https://en.wikipedia.org/wiki/Design_smell)

In this book, a code smell (a high or bad smell) indicates a deeper problem in the code, rather than one on the surface.

The first design smell is rigidity. When code has high rigidity, it means there is often a lot of hard-coding. This makes the code rigid, because small changes can have a much larger impact throughout. In order to avoid a high rigidity smell, code should be dynamic and able to fit the system it is in. If there is too much hard-coding, it will be very tough to make changes later on.
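As a hypothetical Java illustration of my own (not from any particular source):

// Rigid: magic numbers are hard-coded inside the logic. A small change
// to the rate means hunting down and editing every place it appears.
double monthlyPaymentRigid(double principal) {
    return principal * 0.05 / 12;
}

// More flexible: the value is named and passed in, so a requirements
// change is a change of configuration, not of logic.
double monthlyPayment(double principal, double annualRate) {
    return principal * annualRate / 12;
}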

The next design smell is fragility, and this is similar to rigidity with a few exceptions. Fragility is the tendency of code to break when a small change is made. The main difference between the two is that with fragility, the areas that suffer are often unrelated to the changes made, whereas with rigidity, the changes directly affect related code.

The third design smell is immobility. For a lot of code to be useful, it needs to be able to work with other systems as well. With immobility, the code or program could be beneficial to other areas but there is too much effort/risk to be able to successfully integrate it.

The fourth design smell is viscosity, which means that code can become "thick" and hard to work with. When you are designing a program, you want to be able to modify it along with the design, but "hacks" can be done to quickly fix an issue. The problem is that the more hacks in the program, the less the program sticks to its design. This causes a lot of extra code, and the program becomes highly viscous. It is important to know the impact of these small hacks and how to limit them.

The next design smell is needless repetition, which is pretty straightforward. In a program, you do not want to repeat the same code over and over. Instead, find ways to integrate and abstract the code for re-usability.

The next design smell is needless complexity, which is, again, pretty straightforward. This can happen when a program includes sections that are not yet utilized but "could be." That extra code clogs space and has no benefit.

The last design smell is opacity, which simply means becoming less clear. This can happen when there are many changes to a program but no effort to keep the code clean and organized. Without proper care, the code will eventually become foreign even to those who worked on it before.

Keep all these design smells in the back of your mind on your next project!

From the blog CS@Worcester – Journey Through Technology by krothermich and used with permission of the author. All other rights reserved by the author.

Refactoring the Features Table and Beginning GitLab Feature Testing

I started last week by meeting with Dr. Wurst (my project advisor) to go over what I had done up until that point and to figure out what I should be doing going forward with my research. We mainly discussed the new proposed candidate workflow that was created during the LibreFoodPantry (LFP) Retreat after I had left. Most of that discussion was about the two main roles in the diagram, the trustees and the shop managers. It was decided that the trustees would be the people who maintain the LibreFoodPantry project and that the shop managers would be instructors who are teaching a course and developing for the project for a semester. The shop managers would fork a copy of the repository they are working on into their own "shop" for their class, where it would be worked on by student shop developers (another user role). Finally, there is a client / user role that has basic access to the LFP group. We also discussed the GitLab permissions for each of these roles, which differ depending on which GitLab group we are looking at, as there are different permissions for the same role in the LFP group and the shop group.

We decided that I should build this workflow out in all 3 platforms (GitHub Free, GitLab Free, and GitLab Gold) and document the process and any issues. We anticipated issues trying to implement this in GitHub, as it doesn't have the same permission levels as GitLab, which is what the workflow was created with. I also needed to update the features table, as we had discussed the previous week, so that all the features in the same row are related across platforms and use the specific naming that the platforms use. Finally, I was to try to create a simple GitLab repository and implement GitLab's CI in a simple Java Gradle project to see how this worked. We also decided that we would have regular meetings every Tuesday for the rest of the summer.

On Wednesday I created a new table using Google Sheets, as I found that the rows align much better in a spreadsheet than in a Google Docs table. I copied and pasted all of the bullet points for the 3 platforms from the old table into the new one. I then rearranged them in color order. Finally, I went through each one and started renaming the features to be more consistent with how their respective platforms name them in their support or documentation pages.

Thursday I finished up with the table; it was nearly done except for a few little questions for Dr. Wurst in our next meeting. I also created a digital version of the proposed candidate workflow diagram using Draw.io, as I wanted a cleaner-looking version instead of the whiteboard picture from Google Drive. I emailed Dr. Wurst the diagram to check whether I had created it correctly, and had a little question about why one of the branch symbols was located where it was in the diagram. I finished Thursday by creating a GitLab Gold repository and making a GitLab CI configuration file. I cloned it to my desktop and converted it to a Gradle project the way we did this spring in CS-348, using some documentation from the course's Blackboard page. I ran into a problem that others in my class had had previously, with Gradle not working correctly on Windows.
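For reference, a minimal GitLab CI configuration for a Gradle project looks roughly like this; it is a generic sketch, not the actual file from my repository:

# Minimal .gitlab-ci.yml for a Java Gradle project. The official Gradle
# image provides both the JDK and Gradle on the runner.
image: gradle:jdk11

build:
  stage: build
  script:
    - gradle --no-daemon build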

Friday I fixed the Gradle issue using a page from Stack Overflow. I then got GitLab CI to work on this project, as it was previously failing due to a fix I had implemented while trying to get Gradle to work. I found that GitLab's website provides great documentation showing exactly where a failure occurs in a CI pipeline. On Friday I also received an email from Dr. Wurst forwarding a document from another professor whose student had also researched GitHub vs. GitLab. The document compared what they thought were the most important features, some of which I hadn't seen yet, so I looked them up on GitHub and GitLab's websites and added them to my features table. I found this document helpful, as it contained new information I hadn't come across yet.

I decided that the table was done and moved on to testing the candidate workflow. I started by creating 4 Google Accounts: 1 for the shop developer, 2 for the student developers (so I can have two student local repositories to push and pull from), and 1 client / guest account. I documented the usernames and passwords in a Google Sheet so that others in the LFP group can use them. I then created GitLab accounts for these test accounts and added them to the testing group we created in GitLab Gold (I used my account to add them to the LFP test group and the shop manager account to add them to the class shop group, to simulate how it would actually be done with the trustee and shop manager roles). I am using the test group we created in GitLab Gold as the LFP group, and I made a sub-group in it that acts as the shop group. One problem I noticed is that if we are going to create sub-groups for the shop classes within the LFP group on GitLab, there would be permissions issues with trustees having access to a shop that would be a course taught at another institution. I used my GitLab account for the role of trustee, since I was an owner in the testing group. I then created a test repository in the test LFP group under my account and forked it into the shop with the shop manager account. At this point I stopped for the week, as I was unsure how to proceed with testing the workflow, since it involves multiple shop developers cloning, pushing, and pulling on their local computers. This would mean I need multiple local GitLab repositories with different user accounts, and I wanted to ask Dr. Wurst the best way to do this; I thought that creating a VM for each account would be the best approach.

From the blog CS@Worcester – Chris' Computer Science Blog by cradkowski and used with permission of the author. All other rights reserved by the author.

Reading List

Hi dear readers. Welcome back to my next blog post about the next pattern. The pattern I am going to write about today is 'Reading List.' Yep, you heard it right! Reading List! Just because you are a developer doesn't mean that you have no more reading to do.

This pattern in the book talks about how to start a reading list and how to keep up with it. The authors make very good points when they say that it's hard to even start a reading list, as you might not be sure what to read first. I like how one of the recommendations is to ask your mentor or professor. They are the people who know your development level better than anyone and will definitely be a great help. Another point to keep in mind is that the initial reading list you create might (and probably will) change after you have finished one of the books. Having a reading list is also great because it makes you reflect on what you read previously.

As mentioned in the book, we should always think about and analyze the book we just read and try to figure out on our own what the next book to read should be. Computer Science is one of those fields where new things will always keep coming up, and what better way than books to describe these new innovations? Indeed, these new books should be at the top of the list.

I believe there will be cases where you or I as a reader will be disappointed by one or more books. Just because one book was not good enough does not mean we should stop reading other books. I think that when you don't find a book interesting or good enough, it's actually a good sign, as it means you are thinking about the book's content. This will give you a better understanding of what you would like to read next and what your favorite area is. For example, for me, reading lots of books about programming languages wouldn't be too interesting. On the other hand, there are people who love reading each and every book about programming languages, because they're passionate about it.

I would recommend just starting somewhere; then you will be able to figure out what to jump to next.

From the blog CS@Worcester – Danja's Blog by danja9 and used with permission of the author. All other rights reserved by the author.

Apprenticeship Patterns – Share What You Learn (week-4)

This week my reading on the patterns covered "Share What You Learn." By teaching what you already know about your profession, you'll learn more, make new connections with people, and find new opportunities. And by doing that, you may find it very fun.

From the reading, to my understanding, another reason you shouldn't wait to start sharing with others is that it will help you learn. Research has shown that when we explain something to other people, we come to understand it better ourselves. The process of teaching or sharing what we have learned with others helps us recognize gaps in our own understanding and better organize information in our minds.

Also, one thing I noticed is that sharing what you learn with others gives your talents more exposure, thus giving the people you interact with the opportunity to identify you as a valuable expert. Helping others can help you build your reputation. Sharing what you learn can be a great tool for everyone: all you need to do is stay connected to the hot business topics and offer your expertise every time you can. When people are willing to prove their value through their competence, it's easier to notice the ones likely to organize people and take initiative.

Take, for instance, a team: by sharing what you have learned and talking about certain decisions and procedures, the new people or juniors can easily acquire new sets of skills. Create an environment where everybody is encouraged to ask questions, and help professionals in all your locations and job positions stay updated with the latest information in their field. Sharing what you learn also increases the productivity of your team. You can work faster and smarter, as you get easier access to the internal resources and expertise within your organization. Projects don't get delayed, people smoothly get the information they need in order to do their jobs, and your business fits the bill.

I really enjoyed reading the pattern because I took away the main benefits of sharing, strategic ways of sharing, and things that I should not do.

From the blog CS@worcester – Site Title by Derek Odame and used with permission of the author. All other rights reserved by the author.

Record What You Learn

Hi everyone, and welcome to another CS 448 Software Development Capstone blog. Today's blog topic is one of the individual apprenticeship patterns, which happens to be "Record What You Learn." I wanted to read more about this pattern because I can relate. Ever since last semester I have been writing many blogs without fully understanding why I was posting them. At first, I thought this method of recording what I learn was to help us research more information, and that writing it up as blogs would help us understand the material better. After reading this, I realize that it's more than just blogging once per week. I understand now that recording our journey helps keep vital resources, makes our journey explicit, and can be helpful to many others.

I feel like after reading this individual apprenticeship pattern I will change the way I work, because all this time I was blogging, I wasn't really blogging for a purpose; I blogged because it was assigned. I think I will be more deliberate when I blog now. I also picked up a new idea from reading this pattern, which is to keep two journals, one private and one public. I can use the public one for sharing what I have learned and gaining feedback, and the private one to be honest with myself about my progress in programming. I agree that it is best to have both, because of the perks each offers.

I like that the Record What You Learn pattern states that Dave constantly recorded his journey for years and eventually had tons of resources that he later used in his career. The reason I like this statement is that it makes me more motivated when posting these blog assignments, because I know it will come in handy one day in my career. All in all, I felt that this individual apprenticeship pattern, Record What You Learn, is very informative. I learned that every apprentice should keep a journal of their journey and try to write every day, as one day it will help others and even themselves.

From the blog CS@Worcester – Phan's CS by phancs and used with permission of the author. All other rights reserved by the author.