Category Archives: Week 3

Practice, Practice, Practice

People have always asked me, “Why are you so smart?” Well, the truth is: my intelligence and my memory are a result of the same process that leads to world-famous athletes – we practice, practice, practice. I study academically during my semesters, and I study recreationally during my vacations; a true thirst for knowledge always welcomes a refill of the chalice.

This constant flow of acquiring knowledge comes with a cost, though; more often than not, I have been incorrect over and over again. However, such is life within the field of science – experimentation is the process of challenging what we believe to be true and seeing whether it still holds.

Connecting this ramble to Apprenticeship Patterns, I am pleased to tell you that the pattern for this week is “Practice, Practice, Practice”. The pattern delivers exactly what its name advertises: it emphasizes that beginners learn best through “doing” – by performing actions under the guidance of professionals.

I find it interesting that the idea of “practice making perfect” is challenged; the new idea becomes “practice makes permanent”. Upon further analysis, I do have to say that I agree with this confrontation of the old adage – learning techniques improperly can lead to code smells. Proper, constant feedback while learning will prevent the coding landfill from becoming overwhelmed.

Personally, I would have to slightly disagree with the idea that apprentices must rely on their own resources to practice. I understand that this self-reliance can develop an independent learning process for the apprentice; however, a true leader reduces their aid for an adept apprentice (instead of completely removing it). Sure, beginners need more attention, but lifelong guidance builds relationships and motivates crafters; it’s easier to experiment when you feel tethered to a base.

While I have always stuck to the idea of constant practice, I will consider seeking more constant feedback in my future professional habits. Personally, I have always had a fear of being judged for my shortcomings (and have only confronted this fear recently). In order to become the best of the best, we must practice our katas within a dojo of coding professionals. As seen with other patterns in the book, we must assume that we are the worst (in a positive way) and confront our ignorance. Through practice, practice, practice with breakable toys, we achieve the proficiency that we strive for.

From the blog CS@Worcester – mpekim.code by Mike Morley (mpekim) and used with permission of the author. All other rights reserved by the author.

Technical Debt

This weekend, I decided to research technical debt. I wanted to look further into this topic because one of my classes had an activity introducing the term. I came across a wonderful blog discussing the different types of technical debt and how to manage them.

The blog: https://www.bmc.com/blogs/technical-debt-explained-the-complete-guide-to-understanding-and-dealing-with-technical-debt/#ref4

The blog lists four types of technical debt: planned technical debt, unintentional technical debt, unavoidable technical debt, and software entropy.

Planned technical debt: when a team has to compromise on the code but knows the risks and costs that collecting the debt will bring. An example is a team not spending time coding smaller aspects of a project in order to meet a deadline. The blog advises keeping a detailed record of the compromises to help the team keep track of what to address in the future.

Unintentional technical debt: arises from struggles with new coding techniques and problems from rollouts. An example of unintentional technical debt occurring is a design having many errors due to miscommunication between different teams working on a project.

Unavoidable technical debt: arises from changes in technology and business. A business may suddenly change its views on what features a project should have, and new technology may come out that can perform the functions more smoothly. This would make old code defunct.

Software entropy: occurs as software quality degrades. It can be caused by changes made by developers who do not fully understand the purpose of some code, yielding more complex code. Over time, the software becomes riddled with errors and difficult to use.

The blog discusses three ways to manage technical debt, which are: assessing debt, communicating debt, and paying off debt.

Assessing debt: technical debt can be measured by how many days it would take developers to refactor or make changes to the target project, and the monetary value of that time can be calculated. This data could then be compared to upcoming significant events to help with cost analysis.
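The assessment described above boils down to simple arithmetic: remediation time multiplied by what that time costs. Here is a minimal Java sketch; the day counts and rates are entirely made-up figures for illustration, not from the blog:

```java
// Sketch of the cost estimate described above: remediation days x daily rate x team size.
class DebtEstimate {
    static double cost(int remediationDays, double dailyRatePerDeveloper, int developers) {
        return remediationDays * dailyRatePerDeveloper * developers;
    }

    public static void main(String[] args) {
        // Hypothetical figures: 10 days of refactoring, 2 developers at $600/day.
        System.out.println("Estimated cost of the debt: $" + cost(10, 600.0, 2));
    }
}
```

A number like this can then be weighed against upcoming deadlines or releases, as the blog suggests, to decide whether paying the debt down now is worth it.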

Communicating debt: it is important to properly convey the existence and impacts of technical debt to stakeholders so that fixes can be accounted for later.

Paying off debt: there are three ways to pay off technical debt: waive the requirement, refactor the application, or replace the application. Waiving the requirement would not set the team back in creating new features. Refactoring would help improve the structure of the code without changing program behavior. Replacing the application would result in new technical debt, but it should be limited and dealt with quickly.

This blog post taught me about technical debt in more depth, with the different types and different management aspects. I was curious about how technical debt could be calculated, but I’ve learned it can be measured by the time spent to make the changes. Going forward, this information can help me understand what technical debt a project has and how to help deal with it. The graphics and the video the blog attached discussing debt were pleasant to view.

From the blog CS@Worcester – CS With Sarah by Sarah T and used with permission of the author. All other rights reserved by the author.

YAGNI.

Hello and welcome back to my blog! In this blog, I want to discuss YAGNI, which stands for “Ya ain’t gonna need it” or “You aren’t gonna need it.” My professor for CS-343 briefly mentioned it in class one day and I wanted to go over it more in depth. In the past, I’ve done something in my projects where I should have followed the concept of YAGNI instead. I made several methods to change a variable before I actually made the main methods of what that variable would do. In the end, it turns out the methods I made were useless toward my goal and I lost a lot of time. I hope to start applying the concept of YAGNI to my future programming in order to not waste time.

YAGNI is a really important concept in programming. Basically, it means programmers and developers should only implement classes, methods, or other features when they actually need them. By doing this, you avoid unnecessary work and save a lot of time. When you think ahead and try to code a class or method that you think you will need in the future, it can be hard to know exactly what to include in it. The programmer has to do a lot of guessing, and a lot of the time they guess incorrectly and end up not needing the feature they spent time on. By following YAGNI instead, you skip all that guesswork and stay focused on your current task. You should only develop things once they become relevant.

In a large project, YAGNI is especially beneficial. Say a programmer wants to design a feature they know they might need, but they aren’t sure if they need it or are unclear how to implement it. By postponing the development of that feature, what exactly needs to be done becomes much clearer once the feature is relevant again. You should always ask yourself if the feature you are working on is really needed at the current moment. If it’s not, take a note of it and come back to it later once it becomes relevant. That way, you keep the project simpler and you build the features better, since they are relevant and you have a clearer understanding of what to implement. And most importantly, you save a lot of time with YAGNI.
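To make this concrete, here is a made-up sketch (not from the YAGNI article) contrasting speculative code with the YAGNI-friendly version:

```java
import java.util.ArrayList;
import java.util.List;

// Speculative version: setters, resets, and history "just in case".
// None of these are needed yet, and each one is a guess about the future.
class CounterSpeculative {
    private int value;
    private final List<Integer> history = new ArrayList<>(); // no caller ever reads this

    void increment() { history.add(value); value++; }
    void setValue(int v) { value = v; }   // no current requirement asks for this
    void reset() { value = 0; }           // or this
    int getValue() { return value; }
}

// YAGNI version: only what today's task actually requires.
class Counter {
    private int value;
    void increment() { value++; }
    int getValue() { return value; }
}
```

If a reset or a history ever becomes a real requirement, it can be added then, with a clear understanding of what it actually needs to do.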

 

Source: http://c2.com/xp/YouArentGonnaNeedIt.html

From the blog Comfy Blog by and used with permission of the author. All other rights reserved by the author.

week-3

Hello,

I am doing some class activities and looking over some questions ahead of time to save myself a thing or two later. I came across the term “Behavioral Patterns” in class Act. 4 (Model 8); I got curious and looked it up. I found two articles that helped me understand each pattern’s purpose, the problems it solves, real-world analogies, structure, pseudocode, applicability, how to implement it, pros and cons, and its relations with other patterns.

Behavioral patterns are concerned with object interaction and the assignment of responsibilities between objects – how objects communicate, how some depend on others, and how to decouple them so the design gains both flexibility and testability.

The behavioral category is made up of many individual patterns: Interpreter, Template Method, Chain of Responsibility, Command, Iterator, Mediator, Memento, Observer, State, Strategy, and Visitor.

Interpreter

The Interpreter pattern: evaluate sentences in a language according to its grammar. An excellent example of this pattern would be Google Translate, which deciphers the input and shows us the output in another language. Another example would be the Java compiler. The compiler interprets Java code and translates it into bytecode that the JVM uses to perform operations on the device it runs on. It also represents a great way to write simple programs that understand human-like syntax.

Chain of Responsibility – pass requests along a chain of handlers. Upon receiving a request, each handler either processes the request or passes it to the next handler in the chain.

Command – turns a request into a stand-alone object that contains all information about the request. This transformation lets you pass requests as method arguments, delay or queue a request’s execution, and support undoable operations.

Iterator – traverse elements of a collection without exposing its underlying representation (list, stack, tree, etc.)

Mediator – it reduces chaotic dependencies between objects. The pattern restricts direct communications between the entities and forces them to collaborate only via a mediator object.

Memento – it saves and restores the previous state of an object without revealing the details of its implementation.

Observer – define a subscription mechanism to notify multiple objects about any events to the observed entity.

State – lets an object alter its behavior when its internal state changes. It appears as if the object changed its class.

Strategy – define a family of algorithms, put them into a separate class, and make their objects interchangeable.

Template Method – defines the outline of an algorithm in the superclass but lets subclasses override specific steps of the algorithm without modifying its structure.

Visitor – It separates algorithms from the objects on which they operate.
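As a small illustration of the Strategy pattern summarized above, here is a minimal Java sketch; the class names are my own invention, not from the articles:

```java
// Strategy: a family of interchangeable algorithms behind one interface.
interface DiscountStrategy {
    double apply(double price);
}

class NoDiscount implements DiscountStrategy {
    public double apply(double price) { return price; }
}

class HalfPriceSale implements DiscountStrategy {
    public double apply(double price) { return price * 0.5; } // half price
}

// The context delegates to whichever strategy it currently holds,
// so the pricing algorithm can be swapped without changing Checkout.
class Checkout {
    private DiscountStrategy strategy;
    Checkout(DiscountStrategy strategy) { this.strategy = strategy; }
    void setStrategy(DiscountStrategy strategy) { this.strategy = strategy; }
    double total(double price) { return strategy.apply(price); }
}
```

Swapping `NoDiscount` for `HalfPriceSale` at runtime changes the behavior of `Checkout` without touching its code – which is exactly the interchangeability the pattern promises.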

 

From the blog Andrew Lam’s little blog by and used with permission of the author. All other rights reserved by the author.

Why Use Gradle?

Gradle is an automation tool that allows developers to quickly run tasks to prepare, compile, and test software. There are a handful of similar automation tools, but Gradle is one of the most popular and best-supported build automation tools. Without a build tool, a developer would have a series of commands that must be manually typed into the terminal to execute a set of steps before code can be built and run. Doing this is very tedious and does not scale easily, especially for larger projects.

The Gradle build process is divided into three phases. The first is initialization, which prepares the coding environment for the build process. The main type of initialization is fetching dependencies for software projects. A developer is able to declare which libraries their code requires, and Gradle will download them to the project library.

The second phase of the build process is configuration, where Gradle determines what needs to be executed for the software to run. In Gradle, each step is called a task and is run automatically. The order of the tasks is determined during this phase based on what is defined in the Gradle configuration file. Each task can depend upon another, allowing multistep builds to execute without user intervention. Gradle can also automate testing to be run during the build sequence. This allows a developer to identify testing issues during development, because the build will fail if a test does not pass. The purpose of running tests with Gradle is to identify run-time issues that the Java compiler will not pick up.
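A minimal `build.gradle` sketch of these ideas might look like the following; the plugin and test library shown are common defaults, and the version number is just a placeholder, not a recommendation:

```groovy
plugins {
    id 'java' // brings in standard tasks like compileJava, test, and build
}

repositories {
    mavenCentral() // where Gradle fetches declared dependencies
}

dependencies {
    // Declared libraries are downloaded for the project automatically.
    testImplementation 'junit:junit:4.13.2' // version is a placeholder
}

// Tasks can hook into one another; 'test' already runs as part of 'build',
// and the build fails if any test fails.
tasks.named('test') {
    doLast {
        println 'Tests finished.'
    }
}
```

With a file like this in place, a single `gradle build` replaces the series of manual terminal commands described earlier.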

The final phase is execution, where each task is run by Gradle. At the end of this phase, the user will see the status of their build. A successful build means that the tasks ran without error and the software is able to be run. An unsuccessful build can occur for a number of reasons, but usually comes down to a failed test, an unmet dependency, or a compilation error.

I selected this blog because I wanted to learn more about the Gradle build tool, especially because we will be using it for projects in this class. I am familiar with the build tool Webpack for NodeJS, but I have never used a build tool for Java. There are similarities between them, but the configuration for each varies greatly.

One difficulty I have had with Java is dealing with dependencies. In other languages such as NodeJS or Python, there are command-line tools that make installing libraries easy, such as npm and pip. Until now I had yet to find a comparable solution for Java, so I will definitely use Gradle in my next project for managing dependencies.


Reference https://docs.gradle.org/current/userguide/what_is_gradle.html

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

DRY (Don’t Repeat Yourself)

Do not repeat yourself

This term was coined in 1999 by Dave Thomas and Andy Hunt in their book The Pragmatic Programmer. Their definition reads: “Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.” In software engineering, DRY is the principle of reducing duplication in code by keeping a single source – or “fragment” – of reusable code that we can draw on whenever we need it.

Whenever we write a module, or even a single piece of code, we need to keep today’s design principles in mind and use them wisely. We should make them a habit, so that we do not have to consciously recall them every time. This saves development time, but it also makes our software robust, easier to maintain, and easier to extend.

DRY violations – sometimes summed up as WET: “We Enjoy Typing” (or, in other words, “Waste Everyone’s Time”). The phrase refers to writing the same code or logic over and over again.

How to achieve DRY – less code is good: it saves effort and time, it is much easier to maintain, and, above all, it reduces the chances of defects.
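A tiny example of what achieving DRY can look like in practice – the tax rule below is invented purely for illustration:

```java
// Before refactoring, the expression `net + net * 0.25` appeared in both
// receiptLine and invoiceLine. Now the rule lives in exactly one place.
class PriceFormatter {
    private static final double TAX_RATE = 0.25; // assumed rate, for illustration

    // Single, authoritative representation of the "price with tax" rule.
    static double withTax(double net) {
        return net + net * TAX_RATE;
    }

    static String receiptLine(String item, double net) {
        return item + ": " + withTax(net);
    }

    static String invoiceLine(String item, double net) {
        // Reuses withTax instead of repeating the calculation here.
        return "INVOICE " + item + ": " + withTax(net);
    }
}
```

If the tax rule ever changes, only `withTax` needs to be edited – the duplicated version would have required finding and fixing every copy.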

KISS: Keep It Simple, Stupid – the KISS principle reminds us to keep code clear and simple, so that it is as easy to understand as possible. In general, programs should be as understandable as possible to humans, since computers only ever understand zeros and ones. Therefore we must keep our code simple and direct, and keep our methods small – a method should rarely contain more than 40–50 lines.

KISS violations – most of us have experienced a situation where we need to work on a project and have found badly written, convoluted code. It makes us wonder why those unnecessary lines were written in the first place.

How to reach KISS – to avoid violating the KISS principle, we should try to write code as simply as possible and make that simplicity a standard in our own code.

The benefit of KISS – if some functionality was written by one developer as irregular, convoluted code, and another developer later wants to modify that same code, they first have to understand it. Simple code keeps that handover cheap.

Even if we have not yet adopted a productivity system, most of these ideas may sound familiar. The only difference here is that we take a process-oriented approach with a laser focus on unnecessary duplication.

Some of the tasks that rarely come up are:

– Recurring tasks that are similar but unplanned, such as handling a customer complaint.

– Let’s not forget the annual tasks (or even the monthly ones, in cases when they are only tracked for a single week): billing, inspections, audits, major maintenance.

– Ask others about their routine tasks. This helps us fill in the gaps.

References

https://henriquesd.medium.com/dry-kiss-yagni-principles-1ce09d9c601f

https://ardalis.com/don%E2%80%99t-repeat-yourself/

https://blog.softhints.com/programming-principles-and-best-practices-dry-kiss/#kisskeepitshortandsimple

From the blog CS@worcester – Xhulja's Blogs by xmurati and used with permission of the author. All other rights reserved by the author.

Dodging the Dangers of Docker

Greetings fellow programmers. Tonight I am pleased to write a rather useful blog, and one that has been “whipped up fresh” (meaning that I wrote the post rather quickly, and in one take). Anyways, onto the material!

As seen in class, it seems as though many of us (especially those who use Windows OS computers) have had trouble installing the “Docker” container program used for our homework. While I cannot guarantee this to be a “foolproof guide” that completely saves the day, hopefully it will provide some pointers and lead those who still struggle with its installation in the right direction.

Helpful hints for installing Docker (on Windows) include:

  1. Checking that “Virtualization” is enabled (shown under the CPU section of Task Manager). It is usually enabled by default, but can be enabled (if disabled) from the BIOS settings screen.
  2. Installing Windows 10 Pro, Education, or some variant other than “Home”. Windows 10 Home will not work (for some reason). Windows 10 Education can be acquired for free from the WSU IT website with verification of a student ID.
  3. Enabling features such as “Hyper-V”. This task gets a bit more complicated; it involves going to the control panel.
An image to provide a visual aid for Tip #1. Please excuse the sloppily drawn highlight around “Virtualization”.

So, why did I post these tips, and especially so late, after nearly everyone has already set up the software? To start, I don’t believe that everyone has the software set up, and even those who do may still experience some issues (such as myself). These tips, by the way, were derived from an article linked below (“How to Install Docker on Windows 10” by Harshal Bondre); similar guidance can also be found in YouTube videos, class instructions, and other resources.

Personally, I chose this article for two reasons. First, I enjoy helping people, as mentioned in my introduction. I don’t want to see people having to go through the same struggles that I did when learning this material; nothing makes me happier than seeing someone else solve in 5 minutes a problem that took me 5 hours. Second, I firmly believe that maximum proficiency of a material is achieved when someone has the ability to teach it to someone else. If I am able to instruct others on how to install Docker, then it means that I know it by heart. I don’t just know the material so well that I answer questions right, I know it so well that I cannot answer them wrong.

As a student that enjoys helping others (and is aspiring to be a tutor, TA, or other form of future faculty), I may consider using the materials learned here to create a guide to setting up Docker. Alongside some colleagues, we can ensure that the Fall of 2021 will be the last time that any student has this much trouble installing essential class software.

From the blog CS@Worcester – mpekim.code by Mike Morley (mpekim) and used with permission of the author. All other rights reserved by the author.

Software Construction Log #1 – Visualizing Software Design and Modeling through Class Diagrams

            So far in my Software Development courses, the programming assignments I have been tasked to work on have been either partially coded or straightforward enough for myself and other students to complete without much need for extensive planning beforehand. However, this is not the case when a programming project’s scope expands and the functionalities the project is expected to perform increase in complexity. At that point, diving head-first into programming features without at least a basic understanding of how the features should interact with one another will do more harm to the project in the long run than it saves time in the short run.

            Before beginning development, it is important to create schematics for the project in order to properly convey how its components interact with one another, to optimize such interactions prior to development, and to have solid, understandable documentation for the project after development. Recently, I was introduced to the Unified Modeling Language (UML) and the class diagrams that can be created with it for object-oriented development. For this post, I want to focus specifically on the concept of class diagrams overall instead of how they can be created with the use of UML-based tools.

            As I was researching resources on the topic of class diagrams, I came across the materials listed at the end of this post, though I will mostly focus on one tutorial post titled UML Class Diagram Tutorial: Abstract Class with Examples on Guru99.com. Although extensive documentation exists regarding UML modeling and class diagrams, this article by Alyssa Walker introduces and defines the basic concepts of UML class diagrams, covers the types of relations between classes such as Dependency, Generalization, and Association, and outlines best practices for designing class diagrams. Though I was introduced to class diagrams conceptually while enrolled in a Databases course, I can now understand how the concepts used in such diagrams transfer from database design to software design, specifically the concepts of Association and Multiplicity.

           Moreover, by examining the class diagrams provided in the article, I can better understand why it is important to visualize the interactions between classes, especially when developing software in object-oriented languages. Seeing how the implemented classes will interact with each other shows how the features I wish to implement will add complexity to the project overall. Therefore, any factor that contributes to increasing complexity can be properly taken care of well in advance without sacrificing additional development time in the long run, while also providing important documentation that can contribute to the maintenance of the project after development.
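For instance, a one-to-many association with multiplicity “0..*” in a class diagram maps naturally to a collection field in Java. The Library/Book classes below are my own invented example, not from the article:

```java
import java.util.ArrayList;
import java.util.List;

// "Book" sits on the many side of a one-to-many association.
class Book {
    private final String title;
    Book(String title) { this.title = title; }
    String getTitle() { return title; }
}

// A diagram reading "Library 1 --- 0..* Book" becomes a collection field:
// one Library object holds references to any number of Book objects.
class Library {
    private final List<Book> books = new ArrayList<>();

    void add(Book book) { books.add(book); }
    int count() { return books.size(); }
}
```

Sketching the association in the diagram first makes this design decision visible before any code is written.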

Direct link to the resource referenced in the post: https://www.guru99.com/uml-class-diagram.html

Recommended materials/resources reviewed related to class diagrams:
1) https://developer.ibm.com/articles/the-class-diagram/
2) https://www.microtool.de/en/knowledge-base/what-is-a-class-diagram/
3) https://www.ibm.com/docs/en/rsm/7.5.0?topic=structure-class-diagrams
4) https://www.bluescape.com/blog/uml-diagrams-everything-you-need-to-know-to-improve-team-collaboration/

From the blog CS@Worcester – CompSci Log by sohoda and used with permission of the author. All other rights reserved by the author.

Self-Directed Professional Development Post #3

For this week’s blog post, I’ve decided to learn more about API documentation. To do this, I found a YouTube video titled “Get an overview of the Java API documentation and how to use it”. I picked this topic because, as I create and design programs and software, I will likely be referencing the Java API documentation; therefore, it is important that I understand and know how to navigate this information.

One of the first things this tutorial taught me is that the documentation I will be using will be under Java SE and not the Java JDK. The speaker of this tutorial, Steve Perry, did not go too in depth into what the JDK is, so I ended up doing some additional research to learn more about it. JDK stands for Java Development Kit, and it is a platform released by Oracle Corporation for developing Java applications. Although this may seem like a trivial thing to learn, I appreciated picking up more Java and programming jargon.

After explaining that the different classes in the API documentation are grouped by modules, Steve selects the java.base module to walk through because of how commonly used it is. Within modules there are packages, which are groups of related classes. The first package Steve introduces and reviews is java.lang, which contains classes that are fundamental to the Java language. He explains that each package starts out with a description of the package, followed by its interfaces, classes, enumerations, exceptions, and errors. Steve also refers to the documentation of this particular class as a javadoc, a term I’d heard before but now know exactly what it refers to.

Steve then selects the String class, again, because of how commonly used it is. We then learn how to access String’s constructors and methods. Steve shows us how clicking on a method gives you more details, such as a method’s description, parameters, return values, etc. The last useful bit of information I learned from this video was how to access the information for classes if I don’t know what module or package they may be in. I can do this by using the index or clicking on the top left section titled, “All Classes” in order to do so.
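As a quick example of putting the String javadoc to use, here is a small sketch; the behaviors noted in the comments paraphrase the documented method descriptions:

```java
public class StringDocsDemo {
    public static void main(String[] args) {
        String s = "Hello, World";

        // substring(int beginIndex): per the javadoc, returns the substring
        // beginning at beginIndex and extending to the end of the string.
        String word = s.substring(7);   // "World"

        // indexOf(String str): index of the first occurrence, or -1 if absent.
        int comma = s.indexOf(",");     // 5

        System.out.println(word + " / comma at index " + comma);
    }
}
```

Details like parameter meanings and return values for these methods are exactly what the method pages in the documentation spell out.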

Overall, I considered this to be a very worthwhile use of my time because API documentation is something I often get redirected to when I have a question about a particular class I’m using. However, since API documentation contains so much information, I often get overwhelmed and lost in all the classes and information available for a particular class. Because of this blog post and this tutorial, I was able to learn the organizational scheme for API documentation in a way that I will remember. Although it may seem like a small step, it’s a step that makes me a little more confident as a java programmer.

Tutorial link: https://www.youtube.com/watch?v=ULEOb8wLa_k

From the blog Sensinci's Blog by Sensinci's Blog and used with permission of the author. All other rights reserved by the author.

Analyzing Code Smells

This week, the topic that I wanted to focus on was design smells. I wasn’t confident in my own knowledge of what exactly design smells were or how to apply that knowledge when designing code. So I found a blog post by Sandor Dargo in which he reviews the book Refactoring for Software Design Smells. Dargo’s blog features many software-design-related book reviews, so it seemed like a good place to learn a little more about what design smells are.

The review covers a lot of the fundamentals of design smells, much like what we did in class, although it focuses more on what design smells affect within software design, such as understandability and reliability. I really liked this way of thinking about design smells; simply learning them felt a lot more abstract to me, but showing what they affect and how they can harm the design of software makes them a little more understandable, to me at least. The first half of the post talks about the concept of technical debt, and the latter half focuses more on the design smells. Obviously the two impact each other, but I liked his breakdown of technical debt as being similar to taking out a loan to get something done that needs doing – if you don’t keep up with that debt, you sink further and further into it until you fail. I think it is a very good analogy for technical debt, and it aided my understanding of the topic as a whole. The final part of the post focuses on common issues caused by design smells and how these seemingly insignificant things can make your code less and less usable over time.

I found this review to be extremely useful for my understanding of what design smells are and why code refactoring is so important. The most important aspect of this post to me was the discussion of technical debt. It is such an important aspect of software design to consider, as it allows us to make quick-and-dirty solutions when we need to, but it also tells us how important it is to go back and change the code before it gets out of hand. We must design code in a way that makes it as easy to use as possible, and easy to build on in the future. If we properly document our code, allow for reusability within the system, and make use of abstraction and encapsulation to keep code as simple to follow as possible, we can create an environment that significantly improves not just our own workflow, but the workflow of any collaborators or anyone who needs to make use of our code. It is our job as designers to make sure that as time goes on, our code gets easier to work with, especially as complexity increases to meet the demands of our clients.
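As a tiny illustration of paying down that quick-and-dirty debt, here is an invented before/after sketch (mine, not from the book or the review) – same behavior, but the refactored version names its intent:

```java
// Quick-and-dirty version: one opaque expression full of magic numbers.
// It works, but nobody can safely change it without reverse-engineering it.
class ShippingQuick {
    static double cost(double weightKg) {
        return weightKg <= 2 ? 4.99 : 4.99 + (weightKg - 2) * 1.5;
    }
}

// Refactored version: the same behavior, with every constant named and documented.
class ShippingClean {
    static final double BASE_FEE = 4.99;       // flat fee up to the threshold
    static final double THRESHOLD_KG = 2.0;    // weight included in the base fee
    static final double PER_EXTRA_KG = 1.5;    // surcharge per kilogram above it

    static double cost(double weightKg) {
        double extra = Math.max(0.0, weightKg - THRESHOLD_KG);
        return BASE_FEE + extra * PER_EXTRA_KG;
    }
}
```

The first version is the loan; the second is the repayment – nothing about the program’s behavior changed, but the next person to touch this code inherits clarity instead of debt.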


Source:

https://www.sandordargo.com/blog/2021/01/30/refactoring-for-software-design-smells

From the blog CS@Worcester – Kurt Maiser's Coding Blog by kmaiser and used with permission of the author. All other rights reserved by the author.