Monthly Archives: September 2019

Stinky Duck

Stinky Duck – We’ve been doing some fantastic exercises centered around design smells. These are the seven primary design smells:

● Rigidity
● Fragility
● Immobility
● Viscosity
● Needless Complexity
● Needless Repetition
● Opacity

I’m not going to waste your time giving you the definitions. Where’s the fun in that? Do a little research and find out why your code may smell a little funky. But I do want to talk about ducks. Not just any ducks: stinky ducks.

Some of the exercises we’ve been doing have centered around “The Duck Simulator”. What does a duck do? Well, that depends on the duck, right? Mallard Ducks can fly, swim, and quack. But a Decoy Duck does not fly, does not quack, and does not swim, though a Decoy Duck can float.

How do ducks relate to Object Oriented Programming? Let’s imagine a Duck as a Java class. In our first pass at designing a duck we made a Duck superclass with subclasses for each type of duck. What’s wrong with having a Duck superclass and subclasses for each type of duck? Look back at the seven smells. Can you see it? What’s going to happen when we start adding more duck types? We’ll end up with a lot of repetitive code.

What’s the quickest way to reduce repetitive code? What about using inheritance? Inheritance will help, but now we’re overriding a bunch of methods for the different duck types. Decoy Duck doesn’t need a quack method, so we would have to override the quack method. In fact, for every duck type we would have to override methods that are inherited from the superclass. You know, because they are inherited.

We could look at our superclass (Duck) and see where we can improve the code. Well, look at that: all the duck types have something in common. They exhibit specific behaviors. They fly (or not) and quack (or not). We can take these behaviors and create interfaces. Our goal should be to make the superclass fixed but extensible. Now if we keep adding new types of ducks (Ghost Ducks and Rocket Ducks, anyone?) we will be able to identify additional areas for cleanup.

See, our simple Duck went from a slight smell to a real funk quickly. The design smells can be reduced to a manageable level by following the principles of clean code. We’ll talk more about design smells and clean code next week. #CS@Worcester #CS343
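
To make the behavior-interface idea concrete, here is a minimal Java sketch of where the exercise ends up (what the design-pattern literature calls the Strategy pattern). The names FlyBehavior, QuackBehavior, FlyNoWay, and MuteQuack are my own placeholders, not necessarily the ones from the course materials.

```java
// Behaviors pulled out of the Duck hierarchy into interfaces.
interface FlyBehavior { void fly(); }
interface QuackBehavior { void quack(); }

class FlyWithWings implements FlyBehavior {
    public void fly() { System.out.println("Flying with wings"); }
}

class FlyNoWay implements FlyBehavior {
    public void fly() { System.out.println("Can't fly"); }
}

class LoudQuack implements QuackBehavior {
    public void quack() { System.out.println("Quack!"); }
}

class MuteQuack implements QuackBehavior {
    public void quack() { System.out.println("..."); }
}

// The superclass stays fixed; new duck types just plug in behaviors.
abstract class Duck {
    protected FlyBehavior flyBehavior;
    protected QuackBehavior quackBehavior;

    public void performFly() { flyBehavior.fly(); }
    public void performQuack() { quackBehavior.quack(); }
    public void swim() { System.out.println("Floating on the water"); }
}

class MallardDuck extends Duck {
    public MallardDuck() {
        flyBehavior = new FlyWithWings();
        quackBehavior = new LoudQuack();
    }
}

class DecoyDuck extends Duck {
    public DecoyDuck() {
        flyBehavior = new FlyNoWay();
        quackBehavior = new MuteQuack();
    }
}
```

Adding a Ghost Duck or Rocket Duck then means writing a new subclass (and maybe a new behavior class) without touching Duck itself.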

From the blog Michael Duquette by Michael Duquette and used with permission of the author. All other rights reserved by the author.

Moving from Gitlab to Github

Let’s face it, there is no one perfect platform for hosting repositories. This isn’t a debate about whether Github is better than Gitlab; they both have their pluses and minuses. Recently, we had to move a repository from Gitlab to Github. Without using a migration utility we were able to migrate successfully by following a few steps, which I will outline below. This is the approach we used, and it worked for us with these tools:

· Gitbash (https://gitforwindows.org/)
· Notepad++ (https://notepad-plus-plus.org/downloads)

At a high level the steps we followed are (*Note* this is a high-level overview; see the steps below for the details):

· Fork and clone from Gitlab
· Move, find, and replace all pointers at the file system level
· Create an empty repository on Github
· Fork and clone the empty repository from Github into a new folder
· Copy the whole Gitlab project folder to the Github project folder
· Push to the Github repository

Read the steps below a couple of times. DO NOT just skim them (something I am guilty of doing frequently) but read through them and make sure to pay attention to the NOTES.

Steps/Details: To demonstrate the steps taken I’ve created two folders, GitLab and GitHub. We’ll start by working in the Gitlab folder first. From your Gitlab dashboard, fork and clone the repository into the local GitLab folder. Navigate to the GitLab repository folder locally. The goal here is to go folder by folder and, using Notepad++, replace all instances of gitlab with github. Take your time and do each folder individually. Not every file will have a reference to gitlab in it, but Notepad++ has a handy replace-all feature that will make this short work. As you are traversing the folders, rename any folders that are named gitlab to github as well.

Once you finish renaming everything, open git-bash in the gitlab folder and copy everything to your github folder. cp is the bash copy command, and -avr translates to: -a = preserve attributes, file modes, ownership, timestamps; -v = verbose output; -r = copy directories recursively.

On Github create a new empty repository. Open a new git-bash window from the new github repository folder you copied everything into. To retain your Git history, follow the instructions from Github to push an existing repository from the command line (make sure you are in the github repository folder).

NOTE: If you get this error: fatal: remote origin already exists, do not proceed with the git push. Run this from your git-bash session: git remote rm origin. If you don’t receive any additional errors, re-run the git remote add origin followed by the git push.

At this point you have moved your repository to Github. Verify your app is working correctly, and if you are getting any errors go back and double check to make sure you changed all the references from gitlab to github everywhere, including folder names. #CS@Worcester #CS448

From the blog Michael Duquette by Michael Duquette and used with permission of the author. All other rights reserved by the author.

Singleton…

This week was all about design patterns and ways to implement them. We have only just begun, and to start with we talked about Singleton. It is a design pattern that creates a single instance of a class to be used from a well-known access point. I had some previous interaction with this particular design pattern at my work: when we were using the C++ language, this was one of the ways we got around some of the problems we had. Since moving to C# it went away, but I still knew about Singleton before this week’s class at school. I was a little bit curious and wanted to read more about it, so I looked up a blog post by Ted Neward called, surprise, surprise: “Singleton”. Link to it is here.

In this blog Ted talks about it extensively, in my opinion: the context of it, the problem that it solves, and the consequences of the Singleton implementation. I definitely like the information provided, and it helped me expand my knowledge of the design pattern, which is a good extension of my current knowledge from both work and school. One of the parts in the blog really helped me understand it better, and that is: “Reduced name space. “Singleton” is just “global” hiding behind another name. One of the explicit goals (in 1995) was to be able to have the necessary scope-wide state, but without accidentally clashing over names in that global namespace. Languages which support explicit namespacing (Java, C#, C++, Swift, yeah pretty much all of them) mean that we can have this benefit even without doing anything more than moving the global variable into one of those namespacing mechanisms.” This describes Singleton as a global, and I like this description because it really drove home what Singleton is and how to use it, I think….
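
For reference, here is a minimal sketch of the classic lazy Singleton in Java, which is roughly the shape the pattern takes in any language. The class name ConfigurationManager is just an illustrative placeholder, and this simple version is not thread-safe.

```java
// A minimal lazy Singleton: one instance, reached through a well-known access point.
public class ConfigurationManager {
    // The single instance, created on first use.
    private static ConfigurationManager instance;

    // A private constructor prevents anyone else from instantiating the class.
    private ConfigurationManager() { }

    // The well-known access point.
    public static ConfigurationManager getInstance() {
        if (instance == null) {
            instance = new ConfigurationManager();
        }
        return instance;
    }
}
```

Every caller goes through ConfigurationManager.getInstance(), which is exactly why Neward’s description of Singleton as “global” hiding behind another name rings true.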

Overall, I know I will be learning about many other design patterns and implementations, but for now having this is good enough. Who knows, maybe others will be a better solution than Singleton, but I think it is the simplest one to learn. Neward mentions a couple of times the debate of Singleton vs. statics, and that is something interesting on its own, something that I will research a little bit more when time allows. Until then, I remember one thing my boss said about singletons back in the day: “Singletons were used everywhere, and somehow it worked, we didn’t know why, but it did.”

From the blog #CS@Worcester – Pawel’s CS Experience by Pawel Stypulkowski and used with permission of the author. All other rights reserved by the author.

The Singleton Design Pattern

Recently in CS-343, I have been introduced to the concept of design patterns. These are essentially code-writing techniques that help make code easier to describe, modify and maintain. There are a wide variety of design patterns, each with different benefits and drawbacks. My last class on Thursday ended as we began to cover the Singleton Pattern, and so I decided I would look into Singleton in advance of our activities next class about it. My research led me to Andrew Powell-Morse’s blog post “Creational Design Patterns: Singleton,” which can be found here:

https://airbrake.io/blog/design-patterns/creational-design-patterns-singleton

This post, as you may expect, is focused on explaining the Singleton pattern to the reader. Powell-Morse accomplishes this by using real-world analogies to describe the concept of Singleton and a programming example to show how the pattern can be implemented in code. I chose to write about this blog not only because it explains Singleton well, but also because I found the programming example interesting. The purpose of the Singleton pattern is to ensure only a single instance of certain classes can exist at a time, which Powell-Morse clearly states right at the start of the blog. This is a simple enough concept to grasp, but Powell-Morse elaborates further by explaining that Singleton should be thought of in terms of uniqueness rather than singularity. He uses real-world examples of uniqueness, those being individuality in people and a unique deck of cards used for poker games, to describe situations in which Singleton can be useful. These examples, especially the deck of cards, have helped me understand that Singleton is useful in programs that only require a single instance of a class, and I could definitely see myself applying this concept in future projects to help reduce memory usage from unnecessary objects.

Since I found the deck of cards analogy especially helpful, I was pleased to discover that it was the focus of Powell-Morse’s programming example. However, the example’s complexity made it somewhat difficult to understand at first. Instead of simply demonstrating how Singleton is coded, Powell-Morse applies the concept to a program using multiple threads that separately remove cards from the same deck instance. I have not written many programs myself that use multi-threading, and this lack of experience made the example confusing initially. The example is also written in C#, which is a language I am not nearly as experienced in as I am with Java. Despite my initial confusion, I eventually understood the example and grew to appreciate its complexity. The use of multi-threading in the example helped demonstrate a major drawback of Singleton and how to work around it in C#. The example not only taught me how to implement Singleton into my future coding projects using static variables, but it also showed me how to work around Singleton’s issues with multi-threading. This blog post also taught me more about a language that isn’t Java, which is always welcome.
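
Powell-Morse’s example is in C#, but the same idea can be sketched in Java. This is my own rough approximation, not his code: a Singleton deck where getInstance() and dealCard() are written so that multiple threads cannot end up with duplicate decks or corrupt the card list.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Rough, illustrative Java approximation of a thread-safe Singleton deck.
public class Deck {
    // Eager initialization is the simplest thread-safe approach in Java.
    private static final Deck INSTANCE = new Deck();

    private final List<String> cards = new ArrayList<>();

    private Deck() {
        for (String suit : new String[] {"Hearts", "Spades", "Clubs", "Diamonds"}) {
            for (String rank : new String[] {"A", "2", "3", "4", "5", "6", "7",
                                             "8", "9", "10", "J", "Q", "K"}) {
                cards.add(rank + " of " + suit);
            }
        }
        Collections.shuffle(cards);
    }

    public static Deck getInstance() {
        return INSTANCE;
    }

    // synchronized so two threads cannot deal the same card.
    public synchronized String dealCard() {
        return cards.isEmpty() ? null : cards.remove(cards.size() - 1);
    }
}
```

Several threads calling Deck.getInstance().dealCard() then share one shuffled deck, which is the behavior the blog’s multi-threaded example is built around.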

From the blog CS@Worcester – Computer Science with Kyle Q by kylequad and used with permission of the author. All other rights reserved by the author.

Putting the ‘O’ in SOLID


We covered the Open/Closed Principle in a recent lab, and it prompted a desire to cover some of the SOLID principles. I have determined, however, that the length of this blog is short enough that I should dedicate it to a single principle at a time, beginning with the aforementioned Open/Closed Principle.

This particular object-oriented principle was outlined by Bertrand Meyer and states:

“…entities should be open for extension, but closed for modification.”


This, of course, is a complex way of expressing a simple guideline for software development, which is that new functionality should be able to be added without having to radically change existing code. The operative word, extension, is illustrative. If your code were a home and you wanted to add more square footage (read: functionality), you shouldn’t need to knock all the walls, or the whole thing, down to add a bathroom.

In the blog chosen, the author summarizes a talk he gave on the very subject. In it, he provides an example of a program he is developing for a company that calculates the total area of a set of rectangles. As the customer requests more and more shapes be added to the calculator, the original code changes drastically and gets longer and longer; this sort of modification is exactly what the principle opposes. Instead, creating a Shape class with many children of different specific shapes, each with their own area function and with the ability to add more, eliminates constant modification of the main class and allows for constant extensibility in the form of new shapes. A rough sketch of that structure follows.
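
Here is a minimal Java sketch of that structure under my own naming assumptions (Shape, Rectangle, Circle, AreaCalculator); it illustrates the principle rather than reproducing the code from the talk.

```java
import java.util.List;

// Open for extension: new shapes are new subclasses.
// Closed for modification: AreaCalculator never changes.
abstract class Shape {
    abstract double area();
}

class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    double area() { return width * height; }
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    double area() { return Math.PI * radius * radius; }
}

class AreaCalculator {
    double totalArea(List<Shape> shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area();   // no instanceof checks that would need editing later
        }
        return total;
    }
}
```

When the customer asks for triangles, you add a Triangle subclass; AreaCalculator and every existing shape stay untouched.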

To say simply that code should be modular is unhelpful, as it is broad and the general definition of so many other good coding practices. Our class and homework provide another perfect example. Why should we have to constantly override and rewrite portions of a superclass’ methods in each of its child classes to achieve proper functionality? Instead, in making a Quack/Fly Behavior interface we have established a broad mold that many new behaviors can be built off of, access to all of which is then granted by the relationship to the single respective behavior interface.

Therefore, the ability of code to be extended with new functionalities using a single reference (in this case the behaviors, or shapes) instead of needing to rewrite or overwrite code is what is meant by extension, while keeping the existing code from needing constant revisions is being closed for modification. Like the house example earlier, you should constantly be building out, not renovating what exists already.


Sources:

A simple example of the Open/Closed Principle

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

The KISS and RoP Trade-off

I’ve discussed trade-offs in software before. The more you delve into the world of design and engineering, the more evident it becomes that trade-offs are an underlying principle to rule all principles. There is no perfect software; there is no perfect solution. My aim to strive for the best software, however, led me to choose this topic to discuss.

I intended to keep it simple and focus on KISS (Keep it Simple, Stupid), which claims that a simple, straightforward solution is the better solution. But after listening to a podcast about design principles by SoftwareArchitekTOUR*, I was forced to make a trade-off between a simple blog post and a more powerful, generic one. Specifically, they mentioned the RoP (Rule of Power) principle or GP (generalization principle), which states “that a more generic solution is better than a specific one”.

“You will never produce a perfect design… but you can’t typically have both [KISS and RoP]. Either it is simple and specific, or it’s very generic and flexible and therefore no longer simple but rather has a certain degree of complexity. You can typically only strive for a Pareto-Optimal Solution.”

Christian Rehn (translated from German), SoftwareArchitekTOUR Episode 60, 14:30 – 15:45

Rehn says it’s a balancing act, and neither principle is right or wrong. The concept of a Pareto-optimal solution is rather simple: find a solution that is better in at least one aspect, but at least as good in every other aspect. As Rehn says, you have to ask yourself what works best in your situation. What is the best compromise?

So what does this mean when choosing between KISS and RoP? Like any new concept or principle, both should be practiced in isolation, with small projects. It is only with this basic experience that you can learn how to make these decisions in more complicated systems.

In learning design principles, I have refactored code in order to make it adhere to a design pattern or principle. When new situations arise, the old solution may no longer hold water. But each time, the code was kept as simple as possible given the requirements. The Pareto-optimal solution is the simplest solution that provides the required functionality. This leads me to conclude that KISS trumps RoP, especially considering other principles such as YAGNI, which strives to prevent the code smell of unneeded complexity. But on the other hand, the simplest solution in which every class does a single, specific thing, would itself become extremely verbose, violating DRY in particular.
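
As a toy illustration of the trade-off (my own example, not one from the podcast): suppose the requirement is simply to total up order prices. The specific version is simpler to read and test; the generic version is more powerful but carries extra machinery the caller has to understand.

```java
import java.util.List;
import java.util.function.BinaryOperator;
import java.util.function.Function;

public class TradeOff {
    // KISS: does exactly what today's requirement asks for, nothing more.
    static double totalPrice(List<Double> prices) {
        double total = 0;
        for (double p : prices) {
            total += p;
        }
        return total;
    }

    // RoP/GP: a generic fold that can total prices, count items, find a max...
    // More flexible, but no longer the simplest thing that could possibly work.
    static <T, R> R fold(List<T> items, R initial, Function<T, R> map, BinaryOperator<R> combine) {
        R result = initial;
        for (T item : items) {
            result = combine.apply(result, map.apply(item));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Double> prices = List.of(3.50, 12.00, 0.99);
        System.out.println(totalPrice(prices));                      // simple and specific
        System.out.println(fold(prices, 0.0, p -> p, Double::sum));  // same answer, more machinery
    }
}
```

Which one is “better” depends on whether the extra generality will actually be needed, which is exactly the judgment call the podcast is describing.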

Determining how to make these trade-offs is an art and comes with experience. In the meantime, I plan to keep my code as simple as possible and add complexity and generalizations as needed.

* This podcast is only available in German, but the English show notes explain the topics of the podcast in detail.

From the blog CS@Worcester – Inquiries and Queries by ausausdauer and used with permission of the author. All other rights reserved by the author.

Understanding Continuous Integration and Continuous Delivery (CI/CD)

While good testing and version control habits are always helpful and save a lot of time, the process of committing changes, running tests, and deploying the changes can in itself be time consuming. This is especially true in an agile development process, which aims to make small, incremental changes in a series of sprints. Ideally, the process of committing, testing and delivering could be done in a single command.

This is where CI/CD comes in. An Oracle blog post on CI/CD describes the software development pipeline as being a four-step process: commit, build, automated tests, and deploy. CI/CD involves automating this process so that a single commit results in changes deployed to production, so long as the changes can be built and pass the tests. However, any of these stages can be skipped and not all of them have to be automated if it doesn’t make sense to do so. Additionally, other stages can be added if desired.

Oracle stresses the importance of testing in this process, and the requirement to have a suite of unit, integration, and systems tests. These are important to have for any project, but are vital if a commit is automatically deployed to production to make sure users are getting a reliable product.

Gitlab uses Runners to run the jobs defined in a file named “.gitlab-ci.yml”. Jobs can be defined in the stages mentioned above, or any arbitrary number of custom stages, to create a single pipeline. When a commit is made, the pipeline lists the stages and whether they pass. Of course, unit tests should be run before a commit to ensure that the code is behaving as expected, but once a change is committed the rest of the pipeline can be automatic.

In large systems, this automation is especially useful. You may not be able to compile an entire code base on a local machine. You may not be working on the same schedule as the rest of your team. Automating other portions of testing and deployment allows a single developer to finish a feature or patch on their own and reliably get it into the hands of users.

In researching software engineering jobs, it seems that many companies use Docker as part of their CI/CD process. Imagine you want to make sure your software runs on Mac and Windows machines. Docker can use predefined images to build isolated containers in which the code can be run. A pipeline can contain separate build stages, for example, to make sure the code builds on both operating systems before running tests and deploying. Next week, I will look at Docker in more detail and describe how it can be applied to software testing.

From the blog CS@Worcester – Inquiries and Queries by ausausdauer and used with permission of the author. All other rights reserved by the author.

Static VS Dynamic Testing


In software testing, the two most popular or important methods are static and dynamic testing. While the names are fairly self-explanatory, it’s good to go over the distinctions and, more importantly, the actual implementations of both.

Static testing involves review of project documents, including source code, to weed out any actual or potential security flaws and general mistakes. This takes the form of Informal and Technical Reviews, Inspections, Walkthroughs, and more. This process can involve an inspection tool or be performed manually, searching for potential run-time problems, but without actually running the code. Dynamic testing, conversely, involves actually executing the code and looking at functionality, resource usage, and performance. We have been using dynamic testing, specifically unit testing, in our class so far. Dynamic testing is used mainly to verify that the program runs in accordance with the specifications outlined for it, which is specifically called System Testing.
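
As a concrete, hypothetical example of dynamic testing, here is what a small JUnit 5 unit test might look like; Calculator and its add method are made-up names, and the point is simply that the code under test is actually executed and its observed behavior is checked against expectations.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// A made-up class under test.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// Dynamic testing: the code runs and its result is verified.
class CalculatorTest {
    @Test
    void addCombinesTwoNumbers() {
        Calculator calc = new Calculator();
        assertEquals(7, calc.add(3, 4));
    }
}
```

A static check, by contrast, would be a reviewer (or an analysis tool) reading Calculator and the test for naming, complexity, or obvious mistakes without ever running them.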

Of course, both have their own advantages and disadvantages. With static testing you have the potential to find bugs early in the process that might have made later development more troublesome, which can save lots of time and frustration in the future. However, there may exist some flaw that only a running program can reveal, and only in the code that is actually executed. It would seem to me that these should not be exclusive and that, in fact, they have a logical order.

You would begin with static testing, like proofreading a paper: searching for misspelled words (poor variable/method names), run-on sentences (needless complexity or repetition), and reorganizing to best get your point across (logical order of declarations/methods, documentation). Like writing a paper, it is best to do this ahead of run-time (reading through), so that you don’t have to constantly stop to bandage small errors. In the same way, it seems essential to ‘proof-read’ your code before execution to make sure it is free from identifiable errors before you move on to seeking out run-time errors. You wouldn’t want to have to fix both at the same time, just as with proof-reading.

In sum, these two could be grouped as proactive and reactive, static and dynamic respectively, and that best explains their use. As mentioned, it also makes sense to utilize them in a specific order, ensuring code is as correct as possible upon inspection, then running it to see where flaws undetectable in a static environment have arisen. Together these ensure quality software that is optimized and secure.

Static Testing vs Dynamic Testing: What’s the Difference?
Static Testing vs. Dynamic Testing

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.
