The Abstract Factory Design Pattern

This post on DZone covers the abstract factory design pattern and gives an example implementation in Java using geometric shapes. Like the simple factory, this pattern moves object construction out of the client class and into factories. It differs in that the abstract version gives you an abstract factory base with multiple concrete factory implementations, each producing a more specific family of the same original type of object. It also differs in that the client actually obtains an instance of a factory object, rather than just creating different objects inside a single factory class as in the simple factory.

I like the concept of this pattern more than just having a simple class that creates instances of different objects, as in the simple factory. I also like how the design supports multiple families of objects that split off into more specific types, such as how the example Java implementation has 2D and 3D shape types with a factory for each kind. The design also appears efficient, especially in the example implementation, since a factory is only created when it matches the specific type requested in the client call. As with the other factory pattern, you can easily combine it with other design patterns for the objects themselves, such as strategy or singleton, which would further improve the final outcome. Another aspect of this pattern that I like is that the client does not create the objects itself; it just calls the factory's get method through a provider class that sits between the factory and the client.
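To make that structure concrete, here is a minimal sketch in the spirit of the DZone example. The shape names, the `FactoryProvider`, and the `getShape`/`getFactory` methods are my own stand-ins for illustration, not the article's exact code:

```java
// Hypothetical shape families illustrating the abstract factory idea.
interface Shape { String draw(); }

class Circle implements Shape { public String draw() { return "Circle"; } }
class Sphere implements Shape { public String draw() { return "Sphere"; } }

// The abstract factory base: each concrete factory builds one family of shapes.
interface ShapeFactory { Shape getShape(String type); }

class Shape2DFactory implements ShapeFactory {
    public Shape getShape(String type) {
        if (type.equalsIgnoreCase("circle")) return new Circle();
        return null; // other 2D shapes would go here
    }
}

class Shape3DFactory implements ShapeFactory {
    public Shape getShape(String type) {
        if (type.equalsIgnoreCase("sphere")) return new Sphere();
        return null; // other 3D shapes would go here
    }
}

// The provider sits between the client and the factories: a factory is
// only created when the client asks for that particular family.
class FactoryProvider {
    static ShapeFactory getFactory(String family) {
        if (family.equalsIgnoreCase("2D")) return new Shape2DFactory();
        if (family.equalsIgnoreCase("3D")) return new Shape3DFactory();
        return null;
    }
}

public class AbstractFactoryDemo {
    public static void main(String[] args) {
        // The client never calls a constructor for a shape directly.
        Shape s = FactoryProvider.getFactory("3D").getShape("sphere");
        System.out.println(s.draw()); // prints "Sphere"
    }
}
```

Adding a new family (say, 4D shapes) only means adding a new factory class and one line in the provider; the client code stays the same.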

I definitely like this pattern and will certainly consider using it the next time I have to create a program with many different variations of the same objects, such as the shapes or ducks seen in previous programming examples. It will be especially useful if I am type checking user input to make sure a valid type of object is being requested from the factory. Overall, I am finding that as I read more articles about design patterns, especially for many objects sharing the same base, I am gaining a better understanding of how to maximize program efficiency with one or multiple design patterns.

Source: https://dzone.com/articles/abstract-factory-design-pattern

From the blog CS@Worcester – Chris' Computer Science Blog by cradkowski and used with permission of the author. All other rights reserved by the author.

Figuring Out What They Expected

This week I read a post by Joel Spolsky, the CEO of Stack Overflow. The post discusses the terms “user model” and “program model,” and making the program model conform to the user model. The user model is the user's mental understanding of what the program is doing for them. When new users start using a program, they do not arrive with a completely clean slate; they have expectations of how the program is going to work. If they've used similar software before, they will think it's going to work like that other software. If they've used any software before, they will assume your software follows certain common conventions, and they may make intelligent guesses about how the UI is going to work. Similarly, the program model, a program's own “mental model,” is encoded in bits and executed faithfully by the CPU. If the program model corresponds to the user model, you have a successful user interface.

For example, in Microsoft Word (and most word processors), when you put a picture in your document, the picture is actually embedded in the same file as the document itself. After inserting the picture, you can delete the original picture file and the picture will remain in the document. HTML, on the contrary, doesn't let you do this: HTML documents must store their pictures in a separate file. If you take a user who is used to word processors and doesn't know anything about HTML, and sit them down in front of an HTML editor like FrontPage, they will almost certainly think that the picture is going to be stored in the file. That is the user model. Here the program model (the picture must be in a separate file) does not conform to the user model (the picture will be embedded).

If you're designing a program like FrontPage, you have to bring the program model in line with the user model, and you have two choices: change the user model or change the program model. It is remarkably hard to change the user model. You could explain things in the manual or pop up a little dialog box stating that the image file won't be embedded, but these approaches are ineffective because they annoy users, who rarely read them anyway. So the best choice is almost always to change the program model, not the user model. Perhaps when the user inserts the picture, you could make a copy of it in a subdirectory beneath the document file.

You can discover the user model by describing a situation to some users and asking them what they think is happening; then you figure out what they expect. The most popular answer is the best user model, and it's up to you to make the program model match it. You do not actually have to test many users or run a formal usability lab. In many cases, five or six users sitting next to you are enough, because after that you start seeing the same results again and again, and any additional users are just a waste of time. User models aren't very complex: when people have to guess how a program is going to work, they tend to guess simple things rather than complicated things.

It's hard enough to make the program model conform to the user model when the models are simple; when the models become complex, it gets even harder. So you should pick the simplest possible model.

Article: https://www.joelonsoftware.com/2000/04/11/figuring-out-what-they-expected/

From the blog CS@Worcester – ThanhTruong by ttruong9 and used with permission of the author. All other rights reserved by the author.

Path Testing in Software

Hello! Today's topic of discussion is path testing in software development. The article up for discussion is “Path Testing: The Coverage” by Jeff Nyman, so let's get right into it. What exactly is path testing? Path testing is a method of testing that traverses the code in a linear fashion to ensure that the entire program gets test coverage. The point of path testing is to make graphs that represent your tests. This is done by graphing out your program with nodes, which represent different lines of code, methods, or bodies of code, while the connections between the nodes represent their linear relationships. If a program graph is made correctly, you should easily be able to identify the flow of the program, and constructs such as loops should be distinctly represented. So what's the big deal with testing using these program graphs? When we are testing, we want to make sure we test all of our code and the relationships between its different parts. If nodes stand for these different parts of our code, we can test against the program graph by making sure that every node is traversed during testing, as well as every edge between the nodes. This is an important concept for getting 100% test coverage and for making sure that the different parts of the code work together as they properly should.

I have drafted a few different program graphs for programs in my testing class, and I have to say that they make the objective of the code entirely clear. By objective, I mean I can tell exactly what is supposed to happen in the code and the exact order of execution. Loops are entirely clear in these graphs because they are represented by an edge going from one node back up to a node above it. If there are 2 nodes between the first node and the node that loops back up to it, then I know that nodes 1-4 form a loop of some sort: nodes 2 and 3 are some sort of method body that performs an action within the loop, while node 1 is the beginning of the loop and node 4 is the exit condition node. You can test the edges between the nodes to ensure the relationships between them are correct, as well as test the nodes themselves to ensure they work as desired. I think program graph testing is a great way to visualize testing and I hope to use it a lot in the future.
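The loop shape described above can be sketched as a small program graph in code. The node numbering and the path-enumeration logic below are my own illustration (not from Nyman's article), bounding the loop to one repeat so every edge appears on some covered path:

```java
import java.util.*;

// A tiny program graph: node 1 starts a loop, nodes 2-3 are the loop body,
// node 4 is the exit condition (with a back edge to 1), node 5 is the exit.
public class ProgramGraph {
    static Map<Integer, List<Integer>> edges = Map.of(
        1, List.of(2),
        2, List.of(3),
        3, List.of(4),
        4, List.of(1, 5),   // either loop back to 1 or fall through to 5
        5, List.of()
    );

    // Collect every path from node to end, taking the back edge at most once,
    // so both the zero-iteration and one-iteration paths get covered.
    static void paths(int node, int end, List<Integer> cur, int loops,
                      List<List<Integer>> out) {
        cur.add(node);
        if (node == end) out.add(new ArrayList<>(cur));
        else for (int next : edges.get(node)) {
            boolean back = next < node;               // back edge = loop repeat
            if (!back || loops < 1) paths(next, end, cur, loops + (back ? 1 : 0), out);
        }
        cur.remove(cur.size() - 1);                   // backtrack
    }

    public static void main(String[] args) {
        List<List<Integer>> out = new ArrayList<>();
        paths(1, 5, new ArrayList<>(), 0, out);
        System.out.println(out); // both paths through the loop
    }
}
```

Covering both enumerated paths exercises every node and every edge of the graph, which is exactly the coverage goal the post describes.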

Here’s the link: http://testerstories.com/2014/06/path-testing-the-coverage/

 

From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Dynamic Test Process

Source: https://www.guru99.com/dynamic-testing.html

This week's reading is a dynamic testing tutorial written by Radhika Renamala. Dynamic testing is described as a software testing technique in which the dynamic behavior of the code is analyzed. The example provided is a simple login page that requires the end user to input a username and password. When the user enters a username or password, there is an expected behavior based on that input. By comparing the actual behavior to the expected behavior, you are working with the running system to find errors in the code. The article also lays out a dynamic testing process in the order of test case design and implementation, test environment setup, test execution, and bug reporting. The first step is simply identifying the features to be tested and deriving test cases and conditions for them. Then set up the tests to be executed, execute them, and document the findings. Used this way, dynamic testing can reveal hidden bugs that can't be found by static testing.
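As a tiny illustration of the login example, here is a hypothetical check where concrete inputs are executed and the actual result is compared against the expected behavior. The credentials and method names are made up for this sketch, not taken from the tutorial:

```java
// A hypothetical login check, used to contrast expected vs. actual behavior.
public class LoginDemo {
    // Accept only the (made-up) registered credentials.
    static boolean login(String user, String pass) {
        return "alice".equals(user) && "secret123".equals(pass);
    }

    public static void main(String[] args) {
        // Dynamic testing: run the code with real inputs and compare the
        // actual output to the behavior we expect for each input.
        System.out.println(login("alice", "secret123")); // expected: true
        System.out.println(login("alice", "wrong"));     // expected: false
        System.out.println(login("", ""));               // expected: false
    }
}
```

Each line is one executed test case; a mismatch between the printed result and the expected comment would be a bug to report in the last step of the process.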

This reading was interesting because I thought dynamic testing was a simpler process than what is described in the article. Personally, I thought randomly inputting values and observing the output would be sufficient. However, the steps aren't as simple as they look, because there are necessary considerations. In the article, the author warns the reader that other factors should be considered before jumping into dynamic testing. Two of the most important are time and resources, as they will make or break the efficiency of running these tests. Static testing, as I have learned, is based more around creating tests from the code provided, which makes it easy to create tests that are clearly related to the code. But it does not push testers to think outside the box, whereas dynamic testing does. This type of testing will certainly create abnormal situations that can surface a bug. As stated in the article, this type of testing is useful for increasing the quality of your product. After reading this article, I can see why the author concluded that using static and dynamic testing in conjunction is a good way to deliver a quality product.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Stress Testing and a Few Implementations

For this post, I was in the mood to cover something that we haven't specifically covered in class: stress testing. I found a post on guru99.com that covers stress testing as a whole, including the need for stress testing as well as different types of implementations of it. I particularly enjoyed this post because it covers a broad range of topics within this area of testing while being easily understandable for someone who may have no familiarity with the concept whatsoever.

Stress testing is often associated with websites and mobile applications that may experience abnormal traffic surges, sometimes at predictable times and sometimes at completely unpredictable ones. Stress testing ensures that a system works properly under intense traffic and displays appropriate warning messages to alert the right people that the system is under stress. The post points out that the main end goal of stress testing is to ensure that the system recovers properly after failure.

A few different kinds of stress testing are application stress testing, systemic stress testing, and exploratory stress testing. Application stress testing is pretty self-explanatory: it looks for any bottlenecks in the application that may be vulnerable under stress. Systemic stress testing tests multiple systems running on the same server to find bottlenecks of data blocking between applications. Exploratory stress testing covers edge cases that are probably unlikely to ever occur but should still be tested for; the post gives the examples of a large number of users logging in at the same time and a particularly large amount of data being inserted into the database at once.
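A toy sketch of the idea: fire many concurrent "requests" at a made-up service that can only handle a fixed number at once, and count how many get turned away. The capacity, thread counts, and service logic here are all invented for illustration:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// A toy stress test against a service with a fixed concurrency capacity.
public class StressDemo {
    static final int CAPACITY = 10;
    static final AtomicInteger active = new AtomicInteger();

    // Simulated request handler: rejects work beyond CAPACITY in-flight requests.
    static boolean handleRequest() {
        if (active.incrementAndGet() > CAPACITY) {
            active.decrementAndGet();
            return false;                       // rejected under load
        }
        try { Thread.sleep(5); } catch (InterruptedException e) { }
        active.decrementAndGet();
        return true;                            // handled normally
    }

    public static void main(String[] args) throws Exception {
        // Simulate a traffic surge: 100 simultaneous requests.
        ExecutorService pool = Executors.newFixedThreadPool(100);
        AtomicInteger rejected = new AtomicInteger();
        for (int i = 0; i < 100; i++)
            pool.submit(() -> { if (!handleRequest()) rejected.incrementAndGet(); });
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("rejected under load: " + rejected.get());
    }
}
```

A real stress test would use a load-generation tool rather than threads in one JVM, but the shape is the same: push past expected capacity and observe whether the system degrades and recovers gracefully.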

I knew that this kind of testing had to exist because of my experience with certain applications. Years ago, when a new Call of Duty game came out, it was often unplayable for the first day or two because the network system was completely offline or simply unstable. Now, presumably, they have figured out their stress testing, as the service no longer goes offline on release and hasn't for several years. Personally, I don't know if I will be particularly involved with stress testing in my career, but the exposure to this area of testing cannot hurt. I do recommend that people take a look at this post on guru99.

Here is a link to the original post: https://www.guru99.com/stress-testing-tutorial.html

 

From the blog CS@Worcester – The Road to Software Engineering by Stephen Burke and used with permission of the author. All other rights reserved by the author.

Why proxy pattern?

You have probably heard or seen the term “proxy” multiple times in your browser or your OS, and, just like me, not known what it means or what it does. In computer science it also names a design pattern, and I think it is one of the most interesting design patterns. It is such a simple concept, yet effective, and it has been used by a lot of developers, especially in the field of networking.

For starters, according to the Gang of Four, the proxy pattern is categorized as a structural design pattern. Its purpose is to act as a simple wrapper for another object. In other words, the proxy object can be accessed directly by the user, and it can perform its own logic or the configuration changes required by the underlying subject object without giving direct access to the subject. This offers advantages to both developers and users. The pattern is commonly used to hide an object's internal structure and only let the user or client access a certain part of it, i.e., access control. It can also stand in for complex or heavy objects as a skeleton representation. A less popular use of the proxy pattern is to prevent redundant work (e.g., loading data from disk multiple times) every time the object is called.
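That "stand in for a heavy object" use is often illustrated with a virtual proxy that defers loading until first use. The `Image`/`RealImage`/`ImageProxy` names below are the textbook example of this, sketched by me rather than taken from any particular source:

```java
// A virtual-proxy sketch: the proxy exposes the same interface as the
// subject but defers creating the heavy object until it is actually needed.
interface Image { String display(); }

class RealImage implements Image {
    private final String file;
    RealImage(String file) {
        this.file = file;   // imagine an expensive disk load happening here
    }
    public String display() { return "showing " + file; }
}

class ImageProxy implements Image {
    private final String file;
    private RealImage real;                  // not loaded yet
    ImageProxy(String file) { this.file = file; }
    public String display() {
        if (real == null) real = new RealImage(file);  // load once, on demand
        return real.display();
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        Image img = new ImageProxy("photo.png"); // cheap: nothing loaded yet
        System.out.println(img.display());       // prints "showing photo.png"
    }
}
```

Because the client only sees the `Image` interface, the same shape works for access control or caching: the proxy can check permissions or reuse a loaded result before delegating to the real subject.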

The concept of a proxy is actually used a lot in the real world. Take the example of a debit card, one of the most common things that uses this pattern. Every time you swipe or check out with a debit card, you are basically telling the system to transfer money from your bank account to the vendor you are paying. The debit card itself has no trading value; it just represents the amount of money in your bank account so that you don't have to carry that amount around.

The same applies in computer science. For instance, if you are working on a project where you need multiple servers running on separate subdomains but unfortunately only have one server machine, then a reverse proxy is definitely the way to go. In object-oriented programming, accessing an object through another object is also really common and useful for ensuring security and convenience.

From the blog #Khoa'sCSBlog by and used with permission of the author. All other rights reserved by the author.

3 biggest roadblocks to continuous testing

https://www.infoworld.com/article/3294197/application-testing/3-biggest-roadblocks-to-continuous-testing.html

Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. Many organizations have experimented with this kind of test automation, but many of them only achieve small amounts of success and the process is never expanded upon, due to three factors: time and resources, complexity, and results.

Teams notoriously underestimate the amount of time and resources required for continuous testing. Teams often create simple UI tests but do not plan ahead for all of the other issues that pop up, such as numerous false positives. It is important to keep individual tests synced with the broader test framework while also creating new tests for every new or modified requirement, all while trying to automate more advanced cases and keep them running consistently in the continuous testing environment. Second, it is extremely difficult to automate certain tasks that require sophisticated setup. You need to make sure that your testing resources understand how to automate tests across different platforms, and you need secure and compliant test data in order to set up a realistic test and drive it through a complex series of steps. The third problem is results. The most cited complaint with continuous testing is the overwhelming number of false positives that need to be reviewed and addressed. In addition, it can be difficult to interpret the results because they do not provide the risk-based insight needed to decide whether the tested product is ready to be released. The test results give you the number of successful and unsuccessful tests, from which you can calculate an accuracy, but numerous factors contribute to those unsuccessful tests, along with false positives; ultimately the tests will not tell you which part is wrong and what needs to be fixed. Release decisions need to be made quickly, and the unclear results make that more difficult.

Looking at these reasons, I find continuous testing to be a poor choice when it comes to actually trying to test everything in a system. Continuous testing is more about speeding up the process for a company than about releasing a finished product. In a perfect world, the company would allow enough time for the team to test thoroughly, but when it is a race to release a product before your competition, continuous testing may be your only option.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Python vs R

For this week’s blog I will be taking a look at more of the data analysis side of the computer science world with a comparison of R vs Python. When looking to get into data science, business intelligence, or predictive analytics, we often hear of the two programming languages, but we don’t necessarily know which one to learn or use in different situations.

The R language is a statistical and visualization language that was developed in 1992. R has a rich library ecosystem that makes it well suited for statistical analysis and analytical work. Python, on the other hand, is a general software development language implemented in C. Python can be used to deploy and implement machine learning at large scale and can perform similar tasks to R. In terms of usability, Python is better for data manipulation and repeated tasks, while R is better for ad-hoc analysis and general exploration of data sets. Python, being more of a general-purpose programming language, is the go-to for machine learning, while R is better at answering statistical problems.

R comes with many different abilities in terms of data visualization, which can be either static or interactive. R packages such as Plotly, Highcharter, and Dygraphs allow the user to interact with the data. Python has libraries such as scikit-learn, SciPy, NumPy, and Matplotlib. Matplotlib is the standard Python library used to create 2D plots and graphs, while NumPy is used for scientific computing.

Although R has always been the favorite for data scientists and analysts recently Python has gained major popularity. Over the last few years, Python has risen in popularity by over 10 percent total while the use of R has fallen about 5 percent. Since R is more difficult to learn than Python, the general consensus is that the seasoned data scientist uses R, while the entry-level new generation of data analysts prefer Python extensively.

In the end, Python is the clear better choice for machine learning due to its flexibility, especially if the data analysis tasks need to be integrated with web applications. If you need rapid prototyping with datasets to build machine learning models, Python fits well; if you require statistical analysis of a dataset, R is often the easier choice.

https://www.datasciencecentral.com/profiles/blogs/r-vs-python-meta-review-on-usability-popularity-pros-amp-cons

From the blog CS@Worcester – Jarrett's Computer Science Blog by stonecsblog and used with permission of the author. All other rights reserved by the author.

AntiPatterns: Lava Flow

If you've ever contributed to open source projects (or really ever tried to code at all), the concept of Lava Flow probably isn't very foreign to you.

Throughout the course of the development of a piece of software, teams take different approaches to achieve their end goal. Sometimes management changes, sometimes the scope or specification of the project is altered, or sometimes the team simply decides on a different approach. These changes leave behind stray branches of code that may not necessarily align with the intent of the final product. These are what we refer to as “Lava Flows”.

If the current development pathway is a bright, burning, flowing stream of lava, then old, stagnant, dead code hanging around in the program is a hardened, basalt-like run-off that is lingering in the final product. 

Lava Flow adds immense complexity to the program and makes it significantly more difficult to refactor; in fact, that difficulty of refactoring is largely what causes lava flow in the first place. Programmers don't want to touch code they don't fully understand, because they run the risk of interfering with the functionality of the working product. In a way, the flows grow exponentially, because developers create loose workarounds to accommodate pieces of code that may not need to be there at all. Over time, the flow hardens into a part of the final product, even if it contributes nothing to the flowing development path.

So what can we do about this AntiPattern? The first and perhaps most important thing is to make sure that developers take the time to write easy-to-read, well-documented code that is actually relevant to the final project. A preventative approach to Lava Flow is far more important, because refactoring massive chunks of code takes an exorbitant amount of time and money. When large amounts of code are removed, it's important to understand why any bugs that pop up are happening. Looking for quick solutions without having a full grasp of the problems will just perpetuate the original problem.

I found out about this AntiPattern through SourceMaking.com's blog post on it, which delves much deeper into the causes, problems, and solutions involved. I strongly recommend checking out that post as well, as it is far more elaborate than mine and gives great real-world examples along the way.

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

SOLID, GRASP, and Other Basic Principles of Object-Oriented Design

Today's topic for blog 6 is SOLID, GRASP, and other basic principles of object-oriented design. This article teaches you the basics of the SOLID and GRASP principles. First, code should have the following qualities: maintainability, extensibility, and modularity, which make it easy to maintain, extend, and modularize over its lifetime. The article has many examples written in Java and C#. These are the principles the article goes through:

Single responsibility principle, which states that a class should have only one responsibility, and that a class fulfills its responsibility through its methods.

Open-closed principle, which states that a software module (class or method) should be open for extension but closed for modification.

Liskov substitution principle, which states that derived classes must be substitutable for their base classes. What this basically means is that the abstraction should be enough for a client.

Interface segregation principle, which states that clients should not be forced to depend upon interfaces that they do not use.

Dependency inversion principle, which states: program to an interface, not to an implementation.

Hollywood principle, which helps prevent dependency rot. It states that a high-level component can dictate to low-level components in a manner such that neither one is dependent upon the other.

Polymorphism, which the article treats as a design principle.

Information expert, a GRASP principle that helps you assign responsibilities to classes.

Creator, a GRASP principle that helps decide which class should be responsible for instantiating another class.

Pure fabrication, a GRASP principle that introduces a made-up class to keep the domain classes cohesive.

Controller, a design principle that helps minimize the dependency between GUI components and the domain model classes.

And favor composition over inheritance, and indirection.
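As a small taste of how these principles look in code, here is a sketch of the dependency inversion principle in Java. The class names are my own illustration, not the article's examples:

```java
// Dependency inversion: the high-level ReportService depends on the
// Printer interface, never on a concrete printer class.
interface Printer { String print(String text); }

class ConsolePrinter implements Printer {
    public String print(String text) { return "console: " + text; }
}

class ReportService {
    private final Printer printer;          // programmed to an interface
    ReportService(Printer printer) { this.printer = printer; }
    String report(String data) { return printer.print(data); }
}

public class DipDemo {
    public static void main(String[] args) {
        // Swapping in a different Printer implementation (a file printer,
        // a mock for testing) requires no change to ReportService.
        ReportService svc = new ReportService(new ConsolePrinter());
        System.out.println(svc.report("sales up"));  // prints "console: sales up"
    }
}
```

The same injected-interface shape also serves the open-closed principle: new printer types extend the system without modifying the existing service.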

All these topics come with examples and are very easy to understand. I thought this article was very informative; it provides the basic main idea of these principles. After reading it, I changed the way I work because I now know how to apply these principles to improve my code. I agree with everything in this article, but if I were to improve it, I would break GRASP and SOLID into separate articles. All in all, this article was very useful for learning the basics of these different principles and how to apply them.

https://dzone.com/articles/solid-grasp-and-other-basic-principles-of-object-o

From the blog CS@Worcester – Phan's CS by phancs and used with permission of the author. All other rights reserved by the author.