Category Archives: Week 13

Encapsulate What Varies

When we write code, we try to think ahead to the possible changes we may need to implement in the future. There are many ways we can handle these changes, ranging from slapping together a quick patch, to methodically going through the code and changing all the affected parts, to writing the code in such a way that anticipated changes can be added with just one or two small adjustments. This last method is what “encapsulate what varies” means. Writing code will often cause us to think about what future changes we will need, and by isolating those parts of the code we can save ourselves time later. I found an article that does a good job explaining this concept, and while reading through it I was reminded of a recent project where using encapsulation ended up saving me a lot of time and headaches.

The specific event that the article caused me to remember occurred during my most recent internship. One of the projects I worked on was a script that would automatically assemble 3D CAD models of any of the systems the company was working on at the time. This script needed to read the system specifications from a database, organize that data, identify key parts of the system, and figure out how it was assembled so that it could then send those instructions to the CAD software and create the 3D model. It was a big project, and I and the other intern working on it were daunted by the amount of ever-changing data that would need to be accounted for. Many systems were of a unique design, and as such we couldn’t use the exact same code for all systems. The engineers we were attached to for this internship introduced us to something called Python dataclasses. These essentially allowed us to structure the parts of our code that we knew were going to be subject to change in such a way that adding or removing certain data points from the database wouldn’t break the overall program. If any changes arise, we only need to alter the related dataclasses for the rest of the code to work with the new change. Without these we would have had to create new methods or classes for each unique change every time it came up, which is not something anyone wanted. I am glad I found a way of “encapsulating what varies,” since I can now write better and more future-proof code by isolating the parts that I believe will change most often.
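Here is a minimal sketch of that idea, assuming a hypothetical component record read from a database; the class, field, and function names are illustrative and not the actual internship code.

```python
from dataclasses import dataclass, field, fields

# Hypothetical record describing one part of a system. Because the structure
# is isolated in this dataclass, adding or removing a data point only
# requires editing this one definition.
@dataclass
class SystemComponent:
    part_number: str
    name: str
    quantity: int = 1
    attributes: dict = field(default_factory=dict)  # catch-all for columns that vary

def load_component(row: dict) -> SystemComponent:
    """Build a component from a database row, keeping unmodeled columns in `attributes`."""
    known = {f.name for f in fields(SystemComponent)} - {"attributes"}
    core = {k: v for k, v in row.items() if k in known}
    extra = {k: v for k, v in row.items() if k not in known}
    return SystemComponent(**core, attributes=extra)

# Example usage with a row that carries an extra, unmodeled column.
component = load_component(
    {"part_number": "A-100", "name": "bracket", "quantity": 4, "legacy_code": "X"}
)
print(component)
```

Because the rest of the program only ever touches the dataclass, a new or removed database column is absorbed here instead of rippling through every file.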

https://alexkondov.com/encapsulate-what-varies/

From the blog CS@Worcester – Sebastian's CS Blog by sserafin1 and used with permission of the author. All other rights reserved by the author.

GRASP

What is GRASP?

GRASP, standing for “General Responsibility Assignment Software Patterns,” is a set of design principles used in object-oriented software development to assign responsibilities to different modules of code.

The different patterns and principles used in GRASP are controller, creator, indirection, information expert, low coupling, high cohesion, polymorphism, protected variations, and pure fabrication. GRASP helps us decide which responsibility should be assigned to which object or class.

The following are the main design principles:

1. Creator

  • Who creates an object? Or, who should create a new instance of some class?
  • A “container” object creates the “contained” objects.
  • Decide who the creator should be based on the objects’ associations and their interactions.

2. Expert

  • Given an object obj, which responsibilities can be assigned to obj?
  • The Expert principle says to assign those responsibilities to obj for which obj has the information needed to fulfill them.

3. Low Coupling

  • How strongly are the objects connected to each other?
  • Coupling – an object depending on another object.
  • Low coupling – how can we reduce the impact of changes in depended-upon elements on dependent elements?
  • Two elements can be coupled if:
    • One element has an aggregation/composition or association with the other element.
    • One element implements/extends the other element.

4. High Cohesion

  • How are the operations of an element functionally related?
  • Group related responsibilities into one manageable unit.
  • Prefer high cohesion.
  • Benefits:
    • Easily understandable and maintainable.
    • Code reuse.
    • Low coupling.

5. Controller

  • Deals with how to delegate requests from the UI layer objects to domain layer objects.
  • It delegates the work to other classes and coordinates the overall activity.
  • We can make an object a controller if:
    • The object represents the overall system (a facade controller).
    • The object represents a use case, handling a sequence of operations.

6. Polymorphism

  • How do we handle related but varying elements based on element type?
  • Polymorphism guides us in deciding which object is responsible for handling those varying elements (a brief sketch follows this list).
  • Benefit: handling new variations becomes easy.

7. Pure Fabrication

  • A fabricated or artificial class is assigned a set of related responsibilities that does not represent any domain object.
  • It provides a highly cohesive set of activities.
  • Behavioral decomposition – it implements some algorithm.
  • Benefits: high cohesion, low coupling, and the class can be reused.

8. Indirection

  • How can we avoid direct coupling between two or more elements?
  • Indirection introduces an intermediate unit to communicate between the other units, so that the other units are not directly coupled.
  • Benefits: low coupling; e.g., Facade, Adapter, Observer.

9. Protected variation

  • How can we avoid the impact of variations in some elements on other elements?
  • It provides a well-defined interface so that variations will have no effect on other units.
  • Provides flexibility and protection from variations.
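Here is the brief sketch referenced in the Polymorphism item above: a minimal, hypothetical Python example of the Polymorphism and Protected Variations principles, where callers depend only on a small, stable interface, so new variations can be added without changing them. The class and method names are invented for illustration.

```python
from abc import ABC, abstractmethod

# Protected Variations: callers depend on this stable interface,
# not on any concrete payment type.
class PaymentMethod(ABC):
    @abstractmethod
    def pay(self, amount: float) -> str:
        ...

# Polymorphism: each variation handles the behavior itself.
class CreditCard(PaymentMethod):
    def pay(self, amount: float) -> str:
        return f"Charged {amount:.2f} to credit card"

class PayPal(PaymentMethod):
    def pay(self, amount: float) -> str:
        return f"Sent {amount:.2f} via PayPal"

def checkout(method: PaymentMethod, amount: float) -> None:
    # Adding a new PaymentMethod subclass requires no change here.
    print(method.pay(amount))

checkout(CreditCard(), 19.99)
checkout(PayPal(), 5.00)
```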

I chose to talk about GRASP because, as a computer science major interested in software development, I was curious to learn more about how GRASP is used to assign responsibilities to different modules of code and how it provides a means to solve organizational problems.

rao.pdf (colorado.edu)

GRASP (object-oriented design) – CodeDocs

From the blog CS@Worcester – Gracia's Blog (Computer Science Major) by gkitenge and used with permission of the author. All other rights reserved by the author.

‘NoSQL’ Comparison

SQL databases use tables and relations to store data, which makes them rigid in the way data is managed. Developers are forced to store data in a predefined way according to the table and database specifications. This strictness makes working with the data easier in the future because the data is highly structured: given the table properties, a developer knows exactly which fields appear on each row of the table. The downside of this rigidity is that making changes and adding features to an existing codebase becomes difficult. In order to add a field to a single record, the entire table must be updated, and the new field is added to all records in the table. In PostgreSQL, there can be JSON columns where unenforced, loosely structured data can be stored for each record, which serves as a workaround for the highly structured nature of SQL databases. However, this approach is not ideal for all situations, and querying data within a JSON field is slower than querying a regular column. SQL databases use less storage on average than NoSQL databases because the data is standardized and can be compressed using optimizations. However, when a SQL database grows, it usually must be scaled vertically, meaning the server running the database must be given upgraded specifications rather than spreading the resources across more instances.

NoSQL databases use key-value pairs and nested objects to store data, making them much more flexible than SQL databases. One of the most popular NoSQL databases is MongoDB. In these databases, tables are replaced with collections, and each entry is its own document rather than being a row in a table. Document-based storage allows each record to have its own fields and properties, which allows code changes to be made quickly. The downside of having no enforced structure is that required fields can be omitted, and code may expect data that is not present on the object. MongoDB addresses this lack of enforcement with schemas: a way to outline objects with key names and the data types associated with them, ensuring each object in a collection follows the same format. NoSQL databases are also easily scaled horizontally, easing the load by distributing the work across multiple servers.
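As a small, hedged illustration of the schema idea described above, here is a hypothetical sketch using pymongo and MongoDB’s built-in JSON Schema validation, which is one way to get that kind of enforcement (object-document mappers such as Mongoose provide another); the collection and field names are made up.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a local MongoDB instance
db = client["shop"]

# Ask MongoDB to reject documents that do not match this outline of
# key names and types, similar to the "schemas" described above.
db.create_collection(
    "products",
    validator={
        "$jsonSchema": {
            "bsonType": "object",
            "required": ["name", "price"],
            "properties": {
                "name": {"bsonType": "string"},
                "price": {"bsonType": "double"},
                "tags": {"bsonType": "array", "items": {"bsonType": "string"}},
            },
        }
    },
)

db.products.insert_one({"name": "keyboard", "price": 49.99, "tags": ["usb"]})
# Inserting {"name": "mouse"} would now fail validation because "price" is missing.
```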

I selected this topic to learn more about the different use cases between SQL and NoSQL databases such as MongoDB and PostgreSQL. I will use what I learned on future projects to ensure I select the right database technology for the project I am working on.

From the blog CS@Worcester – Jared's Development Blog by Jared Moore and used with permission of the author. All other rights reserved by the author.

JavaScript Best Practices

As a trend from the previous posts, I am still working with JavaScript, and I am still learning more and more about it. In this article I learned some best practices for JavaScript itself: some new to me, and some common to any programming language.

https://www.w3.org/wiki/JavaScript_best_practices

There is not much to summarize for this article, as it is simply a set of best-practice techniques for proper JavaScript coding. As mentioned earlier, it does include some things that by now I should already know and practice: comments should be used as needed but not in excess, naming conventions should be simple and understandable, Big O notation matters, loops should be optimized and not nested needlessly, and clean code style means one function has one purpose rather than extra responsibilities that might be factored out later or seem nonsensical to someone reviewing the code. But there were also some more JavaScript-specific practices related to web development.

Progressive enhancement is a concept that I get on a basic level: providing a service to someone means working through the barriers of your platform to make sure they have access to it, like Microsoft Office products working on a Mac. The article mentions that when scripting, or perhaps even JavaScript itself, is not available on a platform, you need to structure the code so that the page still works on any platform. To me that seems easier said than done, but it does make sense that if the interface to the user can be handled by something else before scripting is involved, then you achieve your goal of broadening your user base and opening your code up.

Another practice I learned about concerns data security: any data being passed through my code should be checked first. I have heard examples of specific businesses being hacked due to a very specific fault in the design itself that left open vulnerabilities, which led to personal information being stolen. Most cases I have heard of come down to the human aspect of security, where an attacker simply calls someone to get access to a password for an account that can then reach that data. But the examples given in the article are about making sure that the data passed to you does not cause errors, using methods that let you discern one data type from another to avoid further conflicts, and generally not relying on validation on the user’s end alone, to prevent users from messing with your own website’s code.
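The article’s examples are specific to JavaScript, but the habit of checking data before acting on it applies to any language. Here is a small, hypothetical Python sketch of that kind of validation; the field names and rules are invented for illustration.

```python
def handle_signup(data: dict) -> dict:
    """Validate untrusted input before using it; never assume its shape is right."""
    errors = {}

    username = data.get("username")
    if not isinstance(username, str) or not username.strip():
        errors["username"] = "must be a non-empty string"

    age = data.get("age")
    if not isinstance(age, int) or not (0 < age < 150):
        errors["age"] = "must be a realistic integer"

    if errors:
        # Reject early instead of letting bad data flow deeper into the program.
        raise ValueError(f"invalid input: {errors}")

    return {"username": username.strip(), "age": age}

print(handle_signup({"username": "  julion ", "age": 24}))
```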

From the blog CS@Worcester – A Boolean Not An Or by Julion DeVincentis and used with permission of the author. All other rights reserved by the author.

Anestiblog #7

This week I read a blog post that I thought really related to the class, about custom software development. The blog started with a section about the process of custom software development, which consists of research, UI design, MVP development, testing, maintenance, and monitoring/support. The blog then goes into why it might be right for you, giving reasons like how it makes your business more unique and personalized to its needs. The next section finally goes into the 7 benefits: a personalized process, cost-effectiveness, reliability, continuous support, flexibility, seamless integration, and exclusive ownership. I see all of those benefits as a definite win, especially how you would be making it your own so that nobody can take it from you. The blog ends by going over how the advantages outweigh the negatives and are too evident to ignore.

I selected this blog post because software development is my dream job, and I thought it would be interesting to read about custom software development since I had never heard of it before. This blog did not disappoint in that aspect, and I think it will help tons of others as well.

I think this blog was a great read that I recommend for many reasons. One reason I would recommend it is that it goes in-depth on the different benefits: it has a section for each benefit and explains how it is a benefit. An example of this is the cost-effectiveness benefit. The blog writes about how this is a huge advantage because, since the software is custom, you only pay to build it, and in the long run it will last you the entire time. Another reason is that custom software development could be something you will need to know how to do in the future, and it is good to know why it is so important; then, when the time comes, you will be ready. The final reason I will go over is that the blog shows how the benefits outweigh the disadvantages. If anyone thinks custom software development is not worth it, maybe this blog could help them change their mind.

This blog taught me how beneficial custom software development really is and why it should be used more often. The material affected me heavily because it showed me that custom software development will be widely needed in the future, so it was important for me to understand it. I will use this knowledge to try to further my software development work in the future.

From the blog CS@Worcester – Anesti Blog's by Anesti Lara and used with permission of the author. All other rights reserved by the author.

Don’t Repeat Yourself

DRY, or “Don’t Repeat Yourself,” is an approach to writing code that emphasizes avoiding repetition. In other words, “Don’t Repeat Yourself” is essentially a way to tell developers to write methods for anything they think they might need to use in more than one place. For example, imagine you are writing a program where certain parts of the code need to do similar things. One way to approach this problem is to write a segment of code to do the necessary task each time the need for it comes up. While this would work in practice, it is far from the best way to handle the issue. Solving the same problem by creating a separate method that achieves the intended goal and can be called whenever it is needed is a far better and more time-efficient solution. GeeksforGeeks has a great, concise article about this, and it even gives some examples using Java code.
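Here is a minimal sketch of the idea in Python (the function, values, and tax rate are made up): the logic that would otherwise be copy-pasted is pulled into one helper that is called wherever it is needed.

```python
# Without DRY, the same calculation gets copy-pasted in several places:
#   subtotal = price * quantity
#   print(f"Order total: ${subtotal * 1.0625:.2f}")   # tax rate hard-coded everywhere

TAX_RATE = 0.0625  # hypothetical sales tax

def order_total(price: float, quantity: int) -> float:
    """Single place that knows how a total is computed."""
    return price * quantity * (1 + TAX_RATE)

# With DRY, every caller reuses the helper, so a tax change touches one line.
print(f"Order total: ${order_total(19.99, 3):.2f}")
print(f"Order total: ${order_total(4.50, 10):.2f}")
```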

And that is really it as far as “Don’t Repeat Yourself” goes. It’s a straightforward rule that helps keep developers from wasting time writing repetitive code snippets. While it may seem simple to implement, I know for a fact that I have had plenty of experience writing repetitive code, especially during my first internship. The issue came down to the ever-changing project requirements and my need to adjust my code to meet them. In doing that, I definitely wrote repetitive code that could have been its own separate class or function; however, as I was working on many different files and pieces of the code, it didn’t resonate with me at first that some of this code could be written as one method and called as needed. Eventually, while I was polishing up some of the code, I realized this mistake and corrected it. I wrote functions that accommodated most of what the repetitive code was supposed to do and replaced that code with calls to these new methods. This ended up causing many small bugs to pop up, however, and I had to spend more time looking for them and fixing them. Had I slowed down when writing my code, I would have been able to plan ahead and create these functions from the get-go, saving me time and energy in the long run. Going forward, I try to be more careful with the code that I write and to think ahead to what may need to be reused. Once I figure that out, I can create a function for it and save myself time and energy later on.

https://www.geeksforgeeks.org/dry-dont-repeat-yourself-principle-in-java-with-examples/

From the blog CS@Worcester – Sebastian's CS Blog by sserafin1 and used with permission of the author. All other rights reserved by the author.

Habits of Efficient Developers

This is a presentation given at the WeAreDevelopers World Congress 2018. WeAreDevelopers is a Vienna-based company designed to connect developers seeking jobs with companies presenting employment opportunities. They primarily do this through conferences and events where they host speakers discussing myriad topics relating to IT and software development. This specific presentation was given by Daniel Lebrero, and he discusses four habits that he has noticed efficient developers have. He breaks each of these habits down into smaller facets that exemplify it, e.g., breaking down “Fast Feedback” into “Test-Driven Development,” “REPL,” “Code Reviews,” and “Continuous Code Reviews.”

I chose this specific presentation because it directly relates to what we do in class on a daily basis. Everyone should be making continuous progress toward being more efficient in their work, and hearing directly from someone in the industry about what makes a person efficient is one of the fastest and easiest ways to improve yourself. The speaker was clear and provided cogent, real-world examples of the habits discussed. He even coded little .js programs to show how a developer would use simple scripts to automate tedious work, and he explored different IDEs and CLIs that he was familiar with.

I found that a lot of what he was saying held true. I have noticed similar habits in hiring directors and successful people in the IT field. The two things that particularly resonated with me were the topics of “Focus” and “No menial work.” One thing that plagues my development cycles, whether that is doing schoolwork for one of my classes or tasks at my actual occupation, is distractions. Mr. Lebrero advises disabling notifications, to the point where you don’t even have the badge showing how many unread notifications you have. He claims that it takes between ten and fifteen minutes every time your work is interrupted to get back on task and return to the headspace you were in prior to the interruption. I agree that, while I am working, any little distraction throws me completely off track and makes it difficult to work. It’s important to have a quiet and clean workspace and lines of communication that don’t impede your workflow. Of course, if something is urgent and worth the ten or fifteen minutes it takes to get back to work, that’s something completely different. On the matter of “No menial work,” however, there really shouldn’t be any excuses. Judging whether a task is worth automating comes with experience and perspective; it’s important to fully understand your task and what’s being asked of you before you try to automate it, as it may take longer to code a small automation script than to just tough it out.
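The speaker demonstrated this with small .js programs; as a rough analogue (not his code), here is a tiny, hypothetical Python sketch of the kind of script that automates a tedious rename task instead of doing it by hand.

```python
from pathlib import Path

def prefix_reports(directory: str, prefix: str = "2021_") -> None:
    """Rename every .csv file in a folder with a prefix instead of doing it manually."""
    for path in Path(directory).glob("*.csv"):
        if not path.name.startswith(prefix):
            path.rename(path.with_name(prefix + path.name))

# Hypothetical usage: prefix_reports("reports")
```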

From the blog CS@Worcester – Jeremy Studley's CS Blog by jstudley95 and used with permission of the author. All other rights reserved by the author.

Docker Swarm

Deploying a large application with Docker often requires a number of Docker containers to be running at the same time and on the same network. Rather than orchestrating this manually, Docker offers an orchestration tool that makes this more manageable: Docker Swarm. I’ve come across the phrase “Docker Swarm” on occasion, but I never had reason to really look into it. It seems like a very useful tool, and I wanted to explore it this week with Gabriel Tanner’s “Definitive Guide to Docker Swarm.”

A Docker Swarm is composed of a number of Docker hosts that run in “swarm mode.” These hosts act either as managers, which manage relationships between nodes, or as workers, which run services.

Tanner defines a node as “an instance of the Docker engine participating in the swarm.” A device may run one or more nodes, but it is typical in deployment environments to distribute Docker nodes across multiple devices. Manager nodes take incoming tasks and distribute them to worker nodes. They also maintain the state of the cluster and manage orchestration. A swarm should have multiple manager nodes in order to maintain availability and to avoid downtime in the case of a manager node failure. Worker nodes exist to carry out commands from the manager nodes such as spinning up a container or starting a service. A manager node can also be a worker node and is both by default.

Services in a Docker Swarm are the definitions of tasks to be carried out by nodes, and defining a service is the primary way a user interacts with a swarm. Creating a service requires the user to specify a container image to be used and a set of commands to be executed.
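As a brief illustration of those concepts, the commands below initialize a swarm on a manager node and define a service; these are standard Docker CLI commands rather than an excerpt from Tanner’s article, and the service name, replica count, and ports are arbitrary.

```sh
# On the machine that will act as a manager node:
docker swarm init

# On each worker machine, join using the token printed by the command above
# (the token and address here are placeholders):
docker swarm join --token <worker-token> <manager-ip>:2377

# Define a service: the image to run and how many replicas the managers
# should keep scheduled across the nodes.
docker service create --name web --replicas 3 --publish 8080:80 nginx

# Inspect and scale the service.
docker service ls
docker service scale web=5
```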

Docker Swarm offers a number of benefits. It contains a built-in load balancer that allows you to dictate how services and containers are distributed between nodes. Swarm is integrated directly into the Docker command-line interface and does not require the installation of any additional software. It is easy to scale and allows you to apply updates incrementally.

I chose this source because I wanted to look more in-depth at some features of Docker I don’t have much experience with. I was not really sure what Docker Swarm was used for, so the context I gained from this article will surely be useful. I did not have space to cover it in this post, but Tanner’s article also details how to set up a Docker Swarm. I will definitely be saving this information for future use.

From the blog CS@Worcester – Ciampa's Computer Science Blog by robiciampa and used with permission of the author. All other rights reserved by the author.

How to Develop Software

For my final blog post I wanted to write about an important topic: software development processes. This relates to another class I am taking, Software Process Management, but I wanted to write a blog post about the topic in this class because it encompasses everything we have done in this class. In the same way that design patterns lay out a method for solving problems, the software development process lays out a method for designing software.

In a blog post published by Diceus, “Step by step software development: 7 phases to build a product”, the software development life cycle is laid out in 7 steps: Brainstorming, Feasibility Analysis, Design, Programming, Integration, Quality Assurance, and finally Release. Some of these steps relate more to this class, and others relate more to Software Process Management, but all of them are necessary to release a functional product.

Brainstorming, also called planning, is the most important part of the software development life cycle. This is the step where you think of products that customers would want and form an abstract idea of how you would implement them.

The next step is the feasibility analysis. This step does not relate to this class so much, but it is important to decide if a project is worth working on.

The next step is the Design step. This is the step where you design the product. I would imagine this is where the API is designed (not programmed) since you have to build your product around the API. I would also assume that design patterns are discussed and chosen based upon the product requirements in this step. I am not sure if UML diagrams would be designed here or during the programming stage since the source does not say, but I would not be surprised if class hierarchy is considered when planning.

The fourth step is Programming. This is the longest step, and it is the grunt work that realizes the design. If UML diagrams were not already designed, they certainly will be during this step, when programmers actually implement the code. I would also imagine this is the step where Docker containers are set up.

The fifth step is Integration. This is the step where the product is integrated into all sources and environments. This step might also be where Docker is set up, but I am not sure since the source does not mention containers.

The sixth step is Quality Assurance. This is another step which does not relate to this class very much, although I suppose excess technical debt can be managed during this step if it was not already during the programming step.

The final step is Release. This is the final step when your product is released to the consumer. This is the goal of every software development process.

I think that a good, organized roadmap is crucial for any product’s development, but it is also very important for software. Software is hard to develop, and systems like this make it easier.

From the blog CS@Worcester – Ryan Blog by rtrembley and used with permission of the author. All other rights reserved by the author.

PlantUML, revised and expanded

As a big fan of LaTeX, I really enjoyed learning and using PlantUML in class. For those of you who do not know what UML is, UML stands for the Unified Modeling Language, a language that allows us to create models and diagrams. It is commonly used to create a visual that conveys information to an audience. In class, we used PlantUML to show the relationships between classes, but there are still a lot of things you can do with the language that we did not have time to cover.

In this first blog post by Rachel Appel, she goes over most of what we covered in class at a slightly slower pace.

In addition to covering what we did in class, in her blog post Ms. Appel also shows another way of using PlantUML diagrams: she shows us how to create use case diagrams. Use case diagrams allow us to describe the relationship between users (or actors, in Ms. Appel’s diagram) and the software and its components. I find this especially useful when describing to others the principle of user privilege, or even, from a managerial standpoint, describing to your employees what information or resources they have access to.
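Here is a minimal sketch of such a diagram in PlantUML; the actors and use cases are invented for illustration.

```plantuml
@startuml
actor Customer
actor Administrator

rectangle "Web Store" {
  usecase "Browse catalog" as UC1
  usecase "Place order" as UC2
  usecase "Manage inventory" as UC3
}

Customer --> UC1
Customer --> UC2
Administrator --> UC3
@enduml
```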

In the previous blog post, it talked about what kinds of diagrams you can make using PlantUML, and in this blog post, it talks about some style choices you can make in your diagrams and informs you about some features in PlantUML to organize your diagram.

https://www.codit.eu/blog/plantuml-tips-and-tricks/?country_sel=be

The blog post talks about many different features you can find in PlantUML: how to change the direction of the arrows, change the box shape, change the arrow color, and even add a legend at the end of the diagram. The next topics the post covers are text alignment and adding a background to your diagram. The post also talks about a couple of other really cool features in PlantUML, so be sure to give it a quick read!
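Here is a small sketch of a few of those styling features (background color, layout direction, a colored arrow, and a legend); the class names are made up.

```plantuml
@startuml
skinparam backgroundColor #FDF6E3
left to right direction

class Order
class Invoice

Order -[#blue]-> Invoice : generates

legend right
  Blue arrows mark billing-related relationships.
end legend
@enduml
```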

I picked this topic to do this week because it was fresh in my mind after talking about it with one of my professors. It is also a topic that I found really interesting when we covered it in class at the beginning of the semester, and it is something I have always wanted to learn more about. In the past, whenever I had to create a model for another class I used LaTeX, Excel, and/or MATLAB to generate the diagram, and sometimes I had to jump through so many hoops to get the diagram to look the way that I wanted it to. With PlantUML, there are a lot of built-in features that I can choose from and use, so I can see myself using this knowledge in another class.

From the blog CS@Worcester – Just a Guy Passing By by Eric Nguyen and used with permission of the author. All other rights reserved by the author.