Category Archives: CS@Worcester

Template Design Pattern

For my blog this week I chose to write about the template design pattern. I picked it because I was recently learning about what design patterns are, and while searching Google I came across a list of different ones. The template design pattern stood out because it seemed useful and fairly simple, so I picked it.

The article first explains that the template design pattern is used as the base of an algorithm whose steps always happen in the same order, where some (or none) of the steps are usually the same. The example given is an algorithm to build a house: the steps are always to build the foundation first, then the pillars, then the walls, and finally the windows, and these must be built in the same order every time. The article then introduces the first class, an abstract template whose main method builds the house by itself calling all of the other steps in the process, which are abstract methods. Since the method that builds the house is the algorithm itself, it should be declared final so it can't be changed. You can also leave the step methods unimplemented, or give them default implementations if they are mostly going to be the same. Then, for each type of house, you create a new class that extends the template class and overrides the unimplemented methods, or any other methods you need to change.
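
Based on that description, a minimal Java sketch of the house example might look something like the following (the class and method names are my own illustration, not the article's exact code):

```java
// Abstract template: the skeleton of the algorithm lives here.
abstract class HouseTemplate {

    // The template method is final so subclasses cannot change the order of the steps.
    public final void buildHouse() {
        buildFoundation();
        buildPillars();
        buildWalls();
        buildWindows();
    }

    // A step that is usually the same can get a default implementation.
    protected void buildFoundation() {
        System.out.println("Building foundation with cement, iron rods, and sand");
    }

    // Steps that vary are left abstract for subclasses to fill in.
    protected abstract void buildPillars();
    protected abstract void buildWalls();
    protected abstract void buildWindows();
}

// One concrete variation: only the varying steps are overridden.
class WoodenHouse extends HouseTemplate {
    @Override protected void buildPillars() { System.out.println("Building pillars with wood"); }
    @Override protected void buildWalls()   { System.out.println("Building wooden walls"); }
    @Override protected void buildWindows() { System.out.println("Building wooden-framed windows"); }
}

public class TemplateDemo {
    public static void main(String[] args) {
        HouseTemplate house = new WoodenHouse();
        house.buildHouse(); // the steps always run in the same order
    }
}
```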

This design pattern saves you from having to retype the algorithm over and over for each subsequent class you make. It also lets you reuse the steps that never change. It does have some disadvantages, though: you can't change the order of the main algorithm, and you often have to override most of the methods, so even if several of the implementations use the same step you may still have to rewrite it anyway. I learned a lot from this article. First of all, I learned a completely new design pattern, and I think it is very useful, especially with algorithms, as the article said. I also learned, just by looking at the list on the website, that there are far more types of design patterns than I originally thought.

Template Method Design Pattern in Java

From the blog CS@Worcester – Tim’s Blog by therbsty and used with permission of the author. All other rights reserved by the author.

Getting a solid grasp of SOLID

For this week’s blog, I have decided to go over the SOLID set of design principles. The blog article “SOLID design principles: Building stable and flexible systems” by Anna Monus describes SOLID and gives solid examples of each design principle with code and UML diagrams.

Single responsibility

Each class should only have a single responsibility, such as handling data or executing logic. While this increases the number of classes in a program, it makes the code easier to modify and clearer to read.
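
As a small hypothetical sketch of this idea (my own example, not the article's), data handling and logic can be split into separate classes:

```java
// Handles only the data of a report.
class ReportData {
    private final String title;
    private final String body;

    ReportData(String title, String body) {
        this.title = title;
        this.body = body;
    }

    String getTitle() { return title; }
    String getBody()  { return body; }
}

// Handles only the logic of formatting a report for output.
class ReportFormatter {
    String format(ReportData report) {
        return report.getTitle() + "\n" + report.getBody();
    }
}
```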

Open/Closed

Classes should be open to extension and closed to modification. This means that new features should be introduced through extension and not through modifying existing code. The open/closed principle allows programs to expand with new features without breaking the old.
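
A quick sketch of what this can look like (again my own made-up example): new behavior is added by writing a new subclass rather than editing existing code.

```java
// New behavior is added by creating new subclasses, not by editing existing ones.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

// Adding Rectangle later does not require touching Shape, Circle,
// or any code that works with Shape references.
class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    @Override double area() { return width * height; }
}
```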

Liskov substitution

Named for its creator Barbara Liskov, the Liskov substitution principle states that a subclass should be able to replace its superclass without breaking the properties of the program. This means that a subclass shouldn't change its superclass' characteristics, such as return types. Following this principle will allow anything that depends on the parent class to use the subclass as well.
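
A minimal sketch of the idea, with made-up classes of my own:

```java
// Any code written against Bird should keep working when given a Sparrow.
class Bird {
    // Returns a short description; subclasses must keep the same contract.
    String describe() { return "a bird"; }
}

class Sparrow extends Bird {
    @Override String describe() { return "a sparrow"; } // same return type, same meaning
}

class Aviary {
    // This method depends only on Bird, so any well-behaved subclass can be passed in.
    static void printDescription(Bird bird) {
        System.out.println(bird.describe());
    }

    public static void main(String[] args) {
        printDescription(new Bird());
        printDescription(new Sparrow()); // substitution does not break the program
    }
}
```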

Interface segregation

Interface segregation states that classes should not have to implement unused methods from an interface. The solution is to divide the interface so that classes can implement only the desired methods. I found this principle easier to understand from a visual example, and the article's UML diagram for interface segregation was useful for this.
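
A rough sketch of the idea in code (hypothetical interfaces of my own, not the article's example):

```java
// Instead of one large interface that forces unused methods on implementers,
// the interface is divided into smaller, focused ones.
interface Printer {
    void printDocument(String doc);
}

interface DocumentScanner {
    String scanDocument();
}

// Each class implements only the interfaces it actually needs.
class SimplePrinter implements Printer {
    @Override public void printDocument(String doc) {
        System.out.println("Printing: " + doc);
    }
}

class MultiFunctionDevice implements Printer, DocumentScanner {
    @Override public void printDocument(String doc) { System.out.println("Printing: " + doc); }
    @Override public String scanDocument() { return "scanned contents"; }
}
```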

Dependency inversion

High- and low-level modules should depend on abstractions. It is common to think that high-level classes should depend on low-level classes, but this makes expanding the code difficult, so it's best to have a level of abstraction between the two. The article's UML example for this principle shows how the abstraction layer allows for easy expansion.
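
A small illustrative sketch, assuming a hypothetical notification example of my own:

```java
// The abstraction that both levels depend on.
interface MessageSender {
    void send(String message);
}

// A low-level detail that implements the abstraction.
class EmailSender implements MessageSender {
    @Override public void send(String message) {
        System.out.println("Emailing: " + message);
    }
}

// The high-level module depends only on the MessageSender abstraction,
// so new senders (SMS, push, ...) can be added without changing it.
class NotificationService {
    private final MessageSender sender;

    NotificationService(MessageSender sender) { this.sender = sender; }

    void notifyUser(String message) { sender.send(message); }
}
```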

The SOLID principles are important for code architecture because they make code expansion simple and easy to understand. I have found myself applying SOLID principles to a project I have been playing with, a simple GUI animation. Originally, I had drawn objects handling their own draw and movement methods, but by using the single responsibility principle I separated the movement-based code into its own class and used composition between classes. This allowed me to reuse the movement code (which contains position, velocity, and acceleration values and methods) for all the different objects that I make. I also made use of the O, L, and D of SOLID to handle the drawn-object hierarchy, allowing my frame class to depend on an abstraction of all my drawn objects. I use a loop to cycle through all drawn objects in a linked list whose type is an abstraction of all drawn objects. I can tell that the structure of the code has made adding new functionality easy and intuitive.
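
Based on that description, a rough sketch of the structure might look like the following (the class names here are hypothetical stand-ins, not my actual project code):

```java
import java.util.LinkedList;
import java.util.List;

// Single responsibility: movement state and updates live in their own class.
class Movement {
    double x, y, vx, vy, ax, ay;

    void update() {
        vx += ax; vy += ay;
        x += vx;  y += vy;
    }
}

// The abstraction the frame depends on (open/closed, Liskov, dependency inversion).
abstract class DrawnObject {
    protected final Movement movement = new Movement(); // composition, not inheritance

    void move() { movement.update(); }
    abstract void draw();
}

class Ball extends DrawnObject {
    @Override void draw() { System.out.println("ball at " + movement.x + "," + movement.y); }
}

class Frame {
    // The frame only knows about the DrawnObject abstraction.
    private final List<DrawnObject> objects = new LinkedList<>();

    void add(DrawnObject obj) { objects.add(obj); }

    void tick() {
        for (DrawnObject obj : objects) { // loop over the abstraction
            obj.move();
            obj.draw();
        }
    }
}
```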

Article Link: https://raygun.com/blog/solid-design-principles/

From the blog CS@Worcester – D’s Comp Sci Blog by dlivengood and used with permission of the author. All other rights reserved by the author.

Take a REST

What is the RESTful architecture style and what is it good for? Well, I am here to break that down for you. REST stands for representational state transfer, and it is an interface that can be implemented in order to help with abstracting resources. For a system to be considered RESTful, it must implement five guiding principles. The first principle is client-server, which deals with the separation of the user interface and data storage. The next principle is that it must be stateless, meaning that when information is requested, all the necessary information must be present in the request and cannot be stored on the server. The next principle of REST is that data must be labeled either cacheable or non-cacheable. If the information is cacheable, it is stored in a client cache. The fourth principle deals with having a uniform interface. The last principle is the layered system, which allows for a hierarchy of layers to implement constraints.

Now that the principles have been laid out, the next main part of REST is the information and data it deals with. A resource in REST is the abstraction of information. Resources can be anything that can be named, from documents to pictures and so on. REST then uses resource identifiers to find the resource that is needed. Resources have resource representations, which are snapshots of the resource at a point in time containing the data, metadata, and hypermedia links. REST defines resource methods that can be used for working with the data. Many people associate REST with the HTTP methods GET/PUT/POST/DELETE; however, since REST has a uniform interface, the user is able to decide which resource methods to use. That said, with REST you are able to utilize these HTTP methods to work with resources. The most common implementation uses GET to retrieve a resource, PUT to change or update a resource, and POST to create a resource. DELETE, of course, is used to delete resources.
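
As a small sketch of how these methods are used from code (using Java's standard java.net.http client and a made-up example URL):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestClientSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "https://api.example.com/users"; // hypothetical resource URI

        // GET: retrieve a resource
        HttpRequest get = HttpRequest.newBuilder(URI.create(base + "/42")).GET().build();

        // POST: create a resource
        HttpRequest post = HttpRequest.newBuilder(URI.create(base))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Ada\"}"))
                .build();

        // PUT: change or update a resource
        HttpRequest put = HttpRequest.newBuilder(URI.create(base + "/42"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"Ada Lovelace\"}"))
                .build();

        // DELETE: delete a resource
        HttpRequest delete = HttpRequest.newBuilder(URI.create(base + "/42")).DELETE().build();

        // Each request is sent the same way; here the response body is read as a string.
        HttpResponse<String> response = client.send(get, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```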

RESTful APIs are very beneficial when working with cloud computing and the web. Because REST does not store any information between executions (it is stateless), it allows for easy scaling. This also means that if anything fails, it will be easy to rework, since nothing was stored on the server. This makes it particularly useful for websites as well, because a user can freely interact with the website without storing any information within it. If you want to read more on REST and RESTful APIs, look into these two websites.

https://restfulapi.net/

https://searchapparchitecture.techtarget.com/definition/RESTful-API

From the blog CS@Worcester – Journey Through Technology by krothermich and used with permission of the author. All other rights reserved by the author.

Test Driven Development: Formal Trial-and-Error


Test Driven Development (TDD), like many concepts in Computer Science, is very familiar to even newer programming students, but they lack the vocabulary to formally describe it. However, in this instance they could probably informally name it: trial-and-error. Yes, very much like the social sciences, computer science academics love giving existing concepts fancy names. If we were to humor them, they would describe it in five-ish steps:

  1. Add test
  2. Run tests, check for failures
  3. Change code to address failures/add another test
  4. Run tests again, refactor code
  5. Repeat
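
In code, one loop of that cycle might look like the following minimal JUnit 5 sketch (my own made-up example): write a failing test first, then just enough code to make it pass, then refactor while the test stays green.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Steps 1-2: write the test first and run it; it fails because add() doesn't exist yet.
class CalculatorTest {
    @Test
    void addsTwoNumbers() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}

// Steps 3-4: write just enough code to make the test pass, run the tests again,
// then refactor while keeping the test green. Step 5: repeat with the next test.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}
```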

The TDD process comes with some assumptions as well, one being that you are not building the system under test while writing tests; these tests are for functionally complete projects. As well, this technique is used to verify that code achieves some valid outcome outlined for it, with a successful test being one that fails, rather than a "successful" test being one that reveals an error as in traditional testing. Related to our most recent classwork, TDD should achieve complete coverage by testing every single line of code, which in the parlance of said classwork would be complete node and edge coverage.

Additionally, TDD has different levels, two to be more precise: Acceptance TDD and Developer TDD. The first, ATDD, involves creating a test to fulfill the specifications of the program and correcting the program as necessary to allow it to pass this test. This testing is also known as Behavioral Driven Development. The latter, DTDD, is usually referred to as just TDD and involves writing tests and then code to pass them in order, as mentioned before, to test the functionality of all aspects of a program.

As it relates to our coursework, the second assignment involved writing tests to check functionality based on the project specifications. While we did not modify the given program code, or only very little of it, we used the iterative process of writing and re-writing tests in order to verify the correct functioning of whatever method or feature we were hoping to test. In this way, the concept is very simple, though it remains to be seen whether it stays that way given different code to test.

Sources:

Guru99 – Test-Driven Development

From the blog CS@Worcester – Press Here for Worms by wurmpress and used with permission of the author. All other rights reserved by the author.

Follow the Yellow Brick Road

Path testing piqued my interest when it was discussed in my CS-443 Software Testing class, so I decided to dig deeper into the topic and see what others said about the testing method. I found an article on GeeksforGeeks that focused on path testing. This type of testing focuses on the paths through the code itself, calculating complexity with McCabe's cyclomatic complexity, E - N + 2P, where E is the number of edges in the control flow graph, N is the number of vertices in the control flow graph, and P is the program factor (the number of connected components). The advantages of path testing are reducing redundant tests and focusing on the logic of the program.
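
As a quick worked example of the formula (my own, not from the article), consider a tiny Java method and its control flow graph:

```java
public class MaxExample {
    // Control flow graph: node 1 = the if decision, node 2 = "return a",
    // node 3 = "return b", node 4 = exit.
    // Edges: 1->2, 1->3, 2->4, 3->4, so E = 4, N = 4, P = 1.
    // Cyclomatic complexity = E - N + 2P = 4 - 4 + 2 = 2,
    // which matches the two independent paths to test: a > b and a <= b.
    static int max(int a, int b) {
        if (a > b) {
            return a;
        } else {
            return b;
        }
    }

    public static void main(String[] args) {
        System.out.println(max(3, 7)); // exercises the a <= b path
        System.out.println(max(7, 3)); // exercises the a > b path
    }
}
```
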
Path testing focuses on the specific program under test and creates the most appropriate test cases based on that program, which in turn allows for the best possible tests to be performed. Understanding code as a node graph allows the tester to accurately understand the program, what needs to be tested, and what can be tested individually or as a group. I really like the way that path testing views code because it is easy to understand and follow. Path testing, to me, is a directed form of testing that most people already do without realizing it, on a much simpler scale, and because of its complexity calculations it has more concrete evidence to support the style of testing.

Link to Article Referenced: https://www.geeksforgeeks.org/path-testing/

From the blog CS@Worcester – Tyler Quist’s CS Blog by Tyler Quist and used with permission of the author. All other rights reserved by the author.

Integration Testing

Integration testing is the second step in your overall testing process. First off, you will perform your unit tests, testing each individual component to see if it passes. However, your testing process is just getting started, and the software is not ready for a full system test. After the unit tests and before the system test, you must run an integration test. This test is the process in which you combine all your units to test for any faults in their interactions with one another. Yes, each individual unit might pass on its own, but having them work together simultaneously is an integral part of the program.

These are the many approaches one can take to Integration testing:

  • Big Bang is an approach to Integration Testing where all or most of the units are combined together and tested at one go. This approach is taken when the testing team receives the entire software in a bundle. So what is the difference between Big Bang Integration Testing and System Testing? Well, the former tests only the interactions between the units while the latter tests the entire system.
  • Top Down is an approach to Integration Testing where top-level units are tested first and lower level units are tested step by step after that. This approach is taken when a top-down development approach is followed. Test Stubs are needed to simulate lower level units which may not be available during the initial phases (a small sketch of such a stub follows this list).
  • Bottom Up is an approach to Integration Testing where bottom level units are tested first and upper-level units step by step after that. This approach is taken when bottom-up development approach is followed. Test Drivers are needed to simulate higher level units which may not be available during the initial phases.
  • Sandwich/Hybrid is an approach to Integration Testing which is a combination of Top Down and Bottom Up approaches.
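
For instance, here is a rough JUnit 5 sketch (with hypothetical class names of my own) of how a test stub can stand in for a lower-level unit while integration-testing a top-level unit:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Lower-level unit, possibly not finished yet.
interface PriceService {
    double priceOf(String item);
}

// Top-level unit under test; it depends on the lower-level unit.
class OrderProcessor {
    private final PriceService prices;

    OrderProcessor(PriceService prices) { this.prices = prices; }

    double total(String item, int quantity) {
        return prices.priceOf(item) * quantity;
    }
}

class OrderProcessorIntegrationTest {
    @Test
    void totalUsesThePriceService() {
        // Test stub simulating the lower-level unit during top-down integration.
        PriceService stub = item -> 2.50;

        OrderProcessor processor = new OrderProcessor(stub);
        assertEquals(7.50, processor.total("widget", 3), 0.0001);
    }
}
```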

I think this part of the testing process is the most interesting. Once you have individually working components it’s like making sure the puzzle pieces fit. 

Integration Testing. (2018, March 3). Retrieved from http://softwaretestingfundamentals.com/integration-testing/.

From the blog cs@worcester – Zac's Blog by zloureiro and used with permission of the author. All other rights reserved by the author.

C4 Models

The C4 model is a tool used by programmers to articulate their software designs in a translatable manner. Using this model, a programmer should be able to communicate their design easily to those outside the profession, say a stakeholder for example, and also to their programming team. The model has levels, and by taking an "abstraction first" approach, the levels show different amounts of complexity that overall allow you and your team to work with the design. A great example of the overview of the C4 model is to think of Google Maps and its zoom feature. If you are looking at a country and are zoomed out so that you can see it in its entirety, then you may only see the name of the country and what is surrounding it. If you zoom in a bit, you may see the states that make up the country. Zooming in even more, you can see the cities and towns, and you may zoom all the way in to a specific location and see what is there. This allows the viewer to use the abstraction to avoid being overloaded with information and instead look at the data in layers of complexity.

C4 models are implemented to represent the three levels of design. The three levels are known as:

System design refers to the overall set of architectural patterns, how the overall system functions—such as which technical services you need—and how it relates to larger enterprise contexts. System design is shown in a Context diagram.

Application design refers to the combination of the services that are needed and how to implement them. Application design is shown in a Container diagram.

Service design refers to the patterns and considerations that are involved in implementing specific services. This type of design starts to emerge in a Component diagram.

Relating these three levels of design to the Google Maps example starts to make things clear. The system design would be the view of a country in its entirety without too much detail, or in other words the big picture. Application design is a bit more zoomed in and could show the states that make up the country, helping to show the relationships between data inside the big picture. Service design would be the specific zoomed-in areas where you can see much more detail and see the function of the data.

This C4 model stands out to me because it helps to bridge the gap between those involved in the field and those who are not. It provides a concise and organized way of communicating project ideas and patterns to your team and to those you are working with who may not understand the other diagrams programmers use. For example, UML diagrams are great for programmers, but you don't want to present one to someone who isn't a programmer; they will have no clue what they are seeing. With a C4 model you can capture the big picture and have the details on hand as well.

(n.d.). Retrieved from https://www.ibm.com/garage/method/practices/code/c4-model-for-software-architecture/.

From the blog cs@worcester – Zac's Blog by zloureiro and used with permission of the author. All other rights reserved by the author.

REST APIs

The latest in-class activity from CS-343 introduced me to Representational State Transfer (REST), which is an architecture used by Client-Server APIs. The activity was helpful in explaining the standard HTTP methods used by REST, specifically GET, PUT, POST, and DELETE, but it didn't really focus on explaining what REST actually is and how APIs that use it are structured. For this reason, I decided to look further into the fundamentals of REST and how to use it. While researching, I came across a blog post by Bivás Biswas titled "How not to blow your REST interview." The post can be found here:

While this blog does indeed give interview tips, it also helps explain REST and the design principles it follows. Biswas focuses on five main principles of REST that RESTful APIs follow: the contract-first approach, statelessness, the client-server model, caching, and layered architecture. I chose to share this blog post because the way it organizes its information on REST makes it easy to follow and understand. For this reason, I think the blog is an excellent resource for learning about REST, and I could see myself coming back to it as a reference if I work with REST in the future.

I liked that Biswas opened the blog by acknowledging common misconceptions about RESTful APIs that he has heard in interviews. One of these misconceptions was that RESTful APIs simply follow the HTTP protocol, which is a misconception I may have developed myself due to the aforementioned class activity being focused on HTTP. The fact that this was immediately stated as incorrect helped indicate to me that REST was more detailed and complex than I understood from class.

I also thought that Biswas’ approach to explaining the five principles of REST was particularly effective. He makes use of analogies and examples to demonstrate each concept instead of relying on technical terms that newcomers to the topic, such as myself, would likely not understand. For example, he explains the contract first approach with a mailbox analogy by suggesting that applications can get the same data without changing URIs in the same way that people can get their mail without changing mailboxes. Similarly, layered architecture is explained by comparing an API’s Gateway layer to a bed’s mattress. Much like a bed frame can be changed without affecting how the mattress feels, changing the fundamental layers of a RESTful API does not change how applications interact with the API’s Gateway. Analogies and examples always help make complex concepts easier to understand for me, and their use in this blog greatly helped increase my understanding of REST and its five core principles. I am by no means an expert on REST just because of this blog, but it has certainly helped prepare me to learn more about it in the upcoming class activities.

From the blog CS@Worcester – Computer Science with Kyle Q by kylequad and used with permission of the author. All other rights reserved by the author.

Docker and Automated Testing

Last week, in my post about CI/CD, I brought up Docker. Docker can be used to create an “image”, which is a series of layers that are built upon each other. For example, you can create an image of the Ubuntu operating system. From there, you can define your own image with Python pre-installed as a second layer. When you run this image, you create a “container”. This is isolated and has everything installed already so that you, or anyone else on your development team, can use the image and know reliably that it has all necessary dependencies and is consistent.

Of course, Docker will get much more complicated and images will tend to have many more layers. In projects that run on various platforms, you will also have images that are built differently for different versions.

So how does this apply to CI/CD? Docker images can be used to run your pipeline, build your software, and run your tests.

The company released a Webinar discussing how CI/CD can be integrated with Docker. They discuss the three-step process of developing with Docker and GitLab: Build, Ship, Run. These are the stages they use in the .gitlab-ci.yml file, but remember you can define other intermediate stages if needed. The power of CI/CD and Docker is apparent, because “from a developer’s perspective, all you have to do is a ‘git push’ — and that’s it”. The developer needs to write the code and upload to version control and the rest is automated, with the exception being human testers who give feedback on the deployed product. However, good test coverage should prevent most issues and these tests are more about overall experience.

[Image: Docker CI and Delivery Workflow, from Docker Demo Webinar, 4:49]

Only five lines of added code in .gitlab-ci.yml are necessary to automate the entire process, although the Docker file contains much more detail about which containers to make. The Docker file defines the created images and the code that needs to be run. In the demo, their latest Ubuntu image is pulled from a server to create a container, on which the code will be run. Then variables are defined and Git is automated to pull source code from the GitLab repository within this container.

Then, a second container is created from an image with Python pre-installed. This container is automated to copy the code from a directory in the first container, explained above. Next, dependencies are automatically installed for Flask, and Flask is run to host the actual code that was copied from the first image.

This defines the blueprint for what to be done when changes are uploaded to GitLab. When the code is pushed, each stage in the pipeline from the .gitlab-ci.yml file is run, each stage passes, and the result is a simple web application already hosted from the Docker image. Everything is done.

In the demo, as should usually be done in practice, this was done on a development branch. Once the features are complete, they can be merged with the master branch and deployed to actual users. And again, from the developer’s perspective, this is done with a simple ‘git push’.

From the blog CS@Worcester – Inquiries and Queries by ausausdauer and used with permission of the author. All other rights reserved by the author.

Refactoring

Once again I was looking through the course topics for my CS-343 class on Software Construction, Design, and Architecture when I came across refactoring. Refactoring is the process of "editing and cleaning up previously written software code without changing the function of the code at all," as described by Sydney Stone in an article called "Code Refactoring Best Practices: When (and When Not) to Do It". The Red-Green-Refactor technique seems to be the most used refactoring process and focuses on three steps: Red, consider what needs to be changed; Green, write just enough code to pass the test written; and Refactor, clean up the code. Refactoring is clearly an important part of the software development process, but it is not something that I have used in my own coding experience so far. The continuous cleaning and optimizing of code and the adding of new functionality help to ensure that the user receives the best experience using that code, program, or tool. At the same time, refactoring may not be the best solution, due to time constraints or because the design or code is not worth refactoring; in some cases it's much easier to just start from scratch. I have found myself in the hole of attempting to change and fix things, and I'm sure almost all people who code have fallen into this hole, because it's much more enticing to try to use or change something you already coded instead of starting over. I personally really like the style of refactoring, and it seems to be more commonly used in things like games, applications, and other user-based software.
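
As a tiny illustration (my own example, not from the article), a refactoring such as "extract method" cleans up the code without changing what it does:

```java
// Before refactoring: printing logic and total calculation are tangled together.
class ReceiptBefore {
    void print(double[] prices) {
        double total = 0;
        for (double p : prices) {
            total += p;
        }
        System.out.println("Total: " + total);
    }
}

// After refactoring: the behavior is identical, but the calculation has been
// extracted into its own well-named method that can be reused and tested.
class ReceiptAfter {
    void print(double[] prices) {
        System.out.println("Total: " + total(prices));
    }

    private double total(double[] prices) {
        double total = 0;
        for (double p : prices) {
            total += p;
        }
        return total;
    }
}
```
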
I can certainly see myself using this technique in the future when working on code or when asked to do part of the refactoring process, because it looks to be common practice in the field of computer science, and especially software engineering. I knew that there was a process for updating, refining, and adding new functionality to software, but I was unsure of its name and of the actual steps that take place in order to yield the best results. I also had no idea that Eclipse has built-in automated refactoring support, which makes me want to learn more about how to use it so I can apply it to my own software in the future. Testing is also an important part of refactoring, so I can apply this method as I learn more about Software Testing and Quality Assurance, which encourages me to learn more about good practices in both areas of software engineering.

Link to Article referenced: https://www.altexsoft.com/blog/engineering/code-refactoring-best-practices-when-and-when-not-to-do-it/

From the blog CS@Worcester – Tyler Quist’s CS Blog by Tyler Quist and used with permission of the author. All other rights reserved by the author.