Category Archives: Week 6

Intro to Layered Pattern

For this post I chose the article “Software Architecture Patterns,” which focuses on the layered architecture pattern. I chose this article because up to this point I had only focused on design patterns, so I wanted to shift my direction. After some googling, the layered pattern seems to be one of the most common, so I thought it would be a good way to move into software architecture.

At its most basic, the layered pattern consists of components organized into horizontal layers, with each layer having a specific role in the application. The most common layers you will find across standard applications are presentation, business, persistence, and database. Each layer forms an abstraction around the work that it does; for example, the presentation layer only needs to display data in the correct format, it does not need to know how that data is retrieved. A useful idea that goes along with this is separation of concerns: the components in a specific layer only deal with logic that pertains to that layer.

One of the key concepts in the layered pattern is the distinction between open and closed layers. If a layer is “closed,” a request must pass through it on its way to the layer directly below; an “open” layer allows a request to bypass it and move on to the next layer down. Isolating layers this way decreases dependencies in the application and lets you change one layer without necessarily needing to change all of the others, which makes refactoring a lot easier.
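
To make the layering concrete, here is a minimal sketch of the idea (my own, not from the article, and the class and method names are invented) in which each layer only talks to the layer directly below it through an interface:

// Persistence layer: knows how to fetch data, nothing about business rules or display
interface CustomerRepository {
    String findNameById(int id);
}

class InMemoryCustomerRepository implements CustomerRepository {
    public String findNameById(int id) {
        return "Customer-" + id; // stand-in for a real database lookup
    }
}

// Business layer: applies application logic, delegates data access to the persistence layer
class CustomerService {
    private final CustomerRepository repository;

    CustomerService(CustomerRepository repository) {
        this.repository = repository;
    }

    String customerGreeting(int id) {
        return "Hello, " + repository.findNameById(id);
    }
}

// Presentation layer: only formats and displays, never touches the database directly
class CustomerScreen {
    public static void main(String[] args) {
        CustomerService service = new CustomerService(new InMemoryCustomerRepository());
        System.out.println(service.customerGreeting(42));
    }
}

Because each layer depends only on the interface of the layer below it, swapping the in-memory repository for a real database layer would not require touching the presentation or business code.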

The layered pattern is a good starting pattern for any general application. One thing to avoid when using this pattern is the architecture sinkhole anti-pattern, where a lot of requests pass through the layers with little to no processing done in each one. A good rule of thumb is the 80-20 rule: it is fine if only about 20 percent of requests are simple pass-throughs, but if most requests fall into that category the layered pattern is probably not the right fit. In an overall rating of this pattern, it is great for ease of deployment and testability and not so great for high performance and scalability.

After reading this article I think the layered design is pretty interesting. For applications with sensitive information, it seems like a good way to control requests and protect data. I also like the idea that each layer is typically independent of the others; this makes changing code and functionality much easier, since you should only need to worry about components in the layer being changed. Moving forward I am not sure whether I will use the layered pattern very soon, but it has gotten me thinking about how to approach a software project. Before this article I had not given much thought to architecture, and I think it gave me a solid intro to what I can expect in further architecture readings.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Object Oriented Knowledge Is Not Inherited

Software Quality Assurance & Testing   URL: https://sourceforge.net/p/tplanrobot/blog/2017/03/image-based-versus-object-oriented-testing/

From the blog CS@Worcester – BenLag's Blog by benlagblog and used with permission of the author. All other rights reserved by the author.

Record and Playback Advantages and Disadvantages

Since last week’s blog did not have a lot of information about Record and Replay (or Record and Playback), I did not know whether I should use it for graphical user interface (GUI) testing or not. Therefore, I decided that I should learn more about it and its advantages and disadvantages. After reading blogs and articles related to Record and Playback, I chose this particular article because it clearly states the problems testers can have when using Record and Playback tools and the scenarios in which Record and Playback can be useful. Below is the URL of the article:

https://www.cio.com/article/3077286/application-testing/record-playback-automation-its-a-trap.html

In this article, Troy T. Walsh, a principal consultant at Magenic in St. Louis Park, shared his view of Record and Playback as a trap that many projects fall into. He laid out the disadvantages these tools have, for example high maintenance cost, limited test coverage, poor understanding of the tools, poor integration, limited features, high price, and vendor lock-in. He also gave some scenarios in which Record and Playback might be a good option, such as learning the underlying automation framework from the code it generates, load testing, and proving a concept.

According to Troy, Record and Playback has limited test coverage. Since it follows the exact steps the tester recorded, it is limited to testing against the user interface. That is why it made sense that Record and Playback was recommended for GUI testing last week, but for test automation it does not have great value. He also thinks that most testers have an incomplete understanding of what exactly these tools are doing, which can lead to huge gaps in test coverage. In my opinion, this disadvantage could be fixed if testers studied the tools more before using them. Furthermore, Record and Playback tools lack features that are important for test automation, like remote execution, parallelization, configuration, data driving, and test management integration, and to use feature-rich options the users need to pay a lot of money every year. Besides those disadvantages, Record and Playback can be used to study the underlying automation framework by recording the steps and observing what gets generated.
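
As a rough illustration (my own, not from Walsh’s article), a recorded login flow usually boils down to something equivalent to the following hand-written Selenium WebDriver sketch; the URL and element ids here are made up:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class RecordedLoginCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Steps a record-and-playback tool would capture while the tester clicks through the UI
            driver.get("https://example.com/login");                    // hypothetical URL
            driver.findElement(By.id("username")).sendKeys("demoUser"); // hypothetical element ids
            driver.findElement(By.id("password")).sendKeys("demoPass");
            driver.findElement(By.id("loginButton")).click();

            // "Playback" check: verify the GUI shows the expected result
            String banner = driver.findElement(By.id("welcomeBanner")).getText();
            if (!banner.contains("Welcome")) {
                throw new AssertionError("Unexpected banner text: " + banner);
            }
        } finally {
            driver.quit();
        }
    }
}

If the id of any of these elements changes, every recorded script that touches it has to be re-recorded or edited by hand, which is where the high maintenance cost mentioned above comes from.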

After reading about the advantages and disadvantages of Record and Playback, I could see that it is not a good tool for test automation, since it is limited in many aspects: high price, high maintenance cost, limited features, limited test coverage, etc. However, in my opinion, it is good enough to be a GUI testing tool. Since GUI testing checks whether the expected executions, the error messages, the GUI element layout, the fonts, the colors, etc. are correct, the testers only need to “record” the steps that the users would take and “play back” to see the results. Therefore, I would try Record and Playback for GUI testing but not for test automation.

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

LEVELS OF SOFTWARE TESTING

Testing is very important to the development of a successful program. Without testing, there would be no guarantee that a particular piece of code fulfills its design purpose. There are basically four levels of testing, namely Unit Testing, Integration Testing, System Testing, and Acceptance Testing. I chose to explore and elaborate on these because we have just started covering those topics in class, beginning with Unit Testing. Due to time constraints, I will describe the four levels of testing briefly, as follows:

Unit Testing: Unit testing is done by programmers on particular functions or code modules, and the white-box testing method is used to achieve this task. Considering the code as one large program, unit testing deals with each of the pieces that come together to form that program and makes sure that each section of the code passes its tests. This makes it easy to figure out which part of your code has a problem, and solutions for a non-functioning section can be found quickly. Sections of code can be tested as they are created rather than waiting until the end, which might make it hard to figure out where a problem is. Unit testing requires knowledge of the internal program design and code.
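
As a small illustration (my own, not from the article), here is what a unit test for a single method might look like in JUnit 4; the Calculator class and its add method are hypothetical:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    // A tiny class under test, defined here only to keep the example self-contained
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    @Test
    public void addReturnsSumOfTwoNumbers() {
        Calculator calc = new Calculator();
        // The test exercises one unit (the add method) in isolation
        assertEquals(5, calc.add(2, 3));
    }
}

If add is ever broken, this test fails on its own, pointing directly at the unit that has the problem.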

Integration Testing: Integration testing is done after unit testing and is a test of combined parts of an application to determine their functional correctness. Unlike unit testing, which tests individual pieces of the code, integration testing gives you the opportunity to gather all the pieces or sections and test them as a group. This enables you to determine how well all the units of your code work together, or, more technically, to verify proper interfaces between modules and subsystems.
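
Continuing the hypothetical example above, an integration test exercises two units through their real interface rather than in isolation; the OrderService and Calculator names are again made up:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class OrderServiceIntegrationTest {

    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    // A unit that depends on another unit; here the two are tested together
    static class OrderService {
        private final Calculator calculator;
        OrderService(Calculator calculator) { this.calculator = calculator; }
        int totalItems(int inCart, int inWishlist) {
            return calculator.add(inCart, inWishlist);
        }
    }

    @Test
    public void orderServiceAndCalculatorWorkTogether() {
        // Both real components are wired together, so the test verifies their interface
        OrderService service = new OrderService(new Calculator());
        assertEquals(7, service.totalItems(4, 3));
    }
}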

System Testing: System testing ensures that the system is in line with all the requirements and meets the quality standards as well as the design purposes of the code. It is the first level of testing in which the whole program is tested to make sure the entire program works as one unit. System testing is often done by someone who is not part of the development group, and it is very necessary because it ensures the program meets the technical, functional, and business requirements it was designed for.

Acceptance Testing: Acceptance testing is related to the user and is designed for the user to test the system to see whether it meets their standards. In other words, it verifies that the system meets the user requirements. At this stage of testing, if nothing changes and the software passes, the program is delivered to the entity that needs it and the programmers’ work is done.

It is important to note that these levels of testing are done progressively, from unit testing up to acceptance testing. I have come to realize that I cannot jump to acceptance testing without first doing the unit, integration, and system testing. This is going to have a great impact on my career, as I now know and clearly understand how testing is done.

References: https://www.seguetech.com/the-four-levels-of-software-testing/

From the blog CS@Worcester – Computer Science Exploration by ioplay and used with permission of the author. All other rights reserved by the author.

QA & TESTING – Episode 1

AB Testing – Episode 1 by Brent Jenson and Allen Page.

In this week’s testing episode, I went back to episode 1 just so I could address the topics that Allen and Brent found necessary to start their testing podcast with. Both Allen and Brent are experienced software developers and testers who have worked for many big companies and performed many big tasks in the world of software development and testing. Allen Page was a software-testing manager who has contributed to many books on software testing (his books are really good if you want information on the subject). Brent Jenson worked at Microsoft for over 20 years and accumulated a lot of experience holding the position of software testing director.

They went on to talk about a meeting format used at Microsoft that I thought could benefit the software testing industry should we all decide to utilize it. They call it Lean Coffee. As comical as this sounds, Lean Coffee is a structured but agenda-less meeting: participants gather, build an agenda, and begin talking. Conversations are directed and productive because the agenda for the meeting was democratically generated. The agenda usually starts with the items viewed as most important and works its way down to the items viewed as less important. This sounded like something we could bring to our own testing team meetings!

As quickly as the podcast began, Allen dove into real software testing concepts. He began by emphasizing the great difference that lies between testing and quality. With constant changes and improvisations, reports of system and program bugs quickly lose value: finding bugs in software that changes constantly, sometimes daily, does not by itself establish the quality of the product, because today’s bug can be fixed in tomorrow’s code change, and that change can in turn create a new bug that gets fixed in the next modification. It is the job of software testers to find bugs and errors in the program, and it is the job of a test manager to schedule test runs and passes for the specific product in development. How could a test manager work successfully in an environment where code changes and modifications are made on a daily basis? The proposed solution goes back to the beginning of the project, where planning and thought have to be put in place. To put forth a great product, time for testing has to be allocated in the project timeline. You cannot put out quality without considering all aspects of possible challenges and inputs.

From the blog CS@Worcester – Le Blog Spot by houtyr and used with permission of the author. All other rights reserved by the author.

The Dangers of Relying on Automated Testing

After listening to Jean Ann Harrison’s discussion about how important critical thinking is in the context of software testing and quality assurance on an episode of Test Talks, I wrote a post about The Limits of Automated Testing. Although Harrison’s explanation was great, I had a few remaining questions and this week chose to look for more information on automated testing. I came across a post by Martin Jansson from March 2017 titled Implication of emphasis on automation in CI, and it seemed to provide me with the more comprehensive view of testing automation that I was looking for.

Jansson starts out on a positive note, stating that he “less frequently see[s] the argumentation that testing is not needed.” To me it is almost comical to think about someone arguing that testing is unnecessary. While I completely understand that managers and executives are enticed by the possibility of saving time and money by not testing software, this is an extremely risky and careless method of creating a product. I doubt that anyone releasing untested software lasts very long or makes any money in the industry.

So if not testing at all is not an option, what are the options? Going with the bare-minimum for testing would be running only automated tests, a method that Jansson says is actually used. I have to agree with Jansson, however, when he says that this is not testing, rather it is simply checking. Instead of exploring parts of the code that are likely to contain bugs, you will simply be checking acceptance criteria. By not exploring the code fully, you are failing to find anything that might be outside the scope of the specification or the requirements. I feel that the following graphic provides an excellent representation of how few tests are actually performed when following a testing strategy that relies solely on automation.

(Source: http://thetesteye.com/blog/2017/03/implication-of-emphasis-on-automation-in-ci/)

What constitutes the perfect blend of automated and manual testing may be impossible to know. What is certain, however, is that automated testing cannot be relied upon as the sole method of testing. Jansson puts it in layman’s terms when he says that “you rarely automate serendipity.” Just as Jean Ann Harrison points out in the Test Talks podcast mentioned earlier, automation is not and will never be a replacement for thought. It is a bit of a relief to know that software development companies are maturing and beginning to understand the importance of having testers who use a combination of automated and manual testing. As long as there continues to be humans writing code, there will need to be humans who test that code.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Post #7

I began researching good JUnit practices as a follow-up to our discussions of it in class.  I found a post on the codecentric Blog by Tobias Goeschel entitled “Writing Better Tests With JUnit” that addresses the pros and cons of JUnit and provides tips on how to improve your own testing.  This is the most thorough (and possibly the longest) article I have found on JUnit testing, so it seems fitting to summarize it in a blog post of my own while we cover the subject in class.
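
One example of the kind of tip such articles tend to give (this sketch is mine, not taken from Goeschel’s post, and the ShoppingCart class is hypothetical) is to give each test a descriptive name and structure it as given/when/then so that it reads like documentation:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartTest {

    // Hypothetical class under test, included so the example is self-contained
    static class ShoppingCart {
        private int items = 0;
        void addItem() { items++; }
        int itemCount() { return items; }
    }

    @Test
    public void addingAnItemIncreasesTheItemCountByOne() {
        // given an empty cart
        ShoppingCart cart = new ShoppingCart();

        // when one item is added
        cart.addItem();

        // then the count reflects exactly that item
        assertEquals(1, cart.itemCount());
    }
}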

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Post #6

Toward the end of our discussion about the Strategy design pattern, we briefly talked about the open/closed principle; I wanted to further my understanding of this concept, so I decided to do some research of my own.  Today, I will summarize an article by Swedish systems architect Joel Abrahamsson entitled “A simple example of the Open/Closed Principle”.

Abrahamsson begins the article by summarizing the open/closed principle as the object oriented design principle that software entities should be open for extension, but closed for modification.  This means that programmers should write code that doesn’t need to be modified when the program specifications change.  He then explains that, when programming in Java, this principle is most often adhered to through inheritance and polymorphism.  We followed this principle in our first assignment of the class, when we refactored the original DuckSimulator program to utilize the Strategy design pattern.  We realized, in our in-class discussion of the DuckSimulator, that adding behaviors to Ducks would force us to update the implementation of the main class as well as each Duck subclass.  By refactoring the code to implement an interface in independent behavior classes – and then applying those behaviors to Ducks in the form of “setters” – we opened the program for extension and left it closed for modification.  Abrahamsson then gives his own example of how the open/closed principle can improve a program that calculates the area of shapes.  The idea is that, if the open/closed principle is not adhered to in a program like this, the area-calculating code must be modified and keeps growing as functionality is added to calculate the area of more and more shapes.
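
As a rough sketch of what that refactoring looks like (the class and method names here are my own guesses, not necessarily the exact ones from the assignment), each behavior lives behind an interface and is injected into the Duck through a setter:

// Behavior interface: each concrete behavior is its own class
interface FlyBehavior {
    void fly();
}

class FlyWithWings implements FlyBehavior {
    public void fly() { System.out.println("Flying with wings"); }
}

class FlyNoWay implements FlyBehavior {
    public void fly() { System.out.println("Cannot fly"); }
}

class Duck {
    private FlyBehavior flyBehavior;

    // New behaviors can be added without modifying Duck itself
    public void setFlyBehavior(FlyBehavior flyBehavior) {
        this.flyBehavior = flyBehavior;
    }

    public void performFly() {
        flyBehavior.fly();
    }
}

Adding a new flying behavior now only means writing one more class that implements FlyBehavior; Duck and the main simulator stay closed for modification.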

(Note: Abrahamsson’s examples below are clearly not Java implementations.)

public double Area(object[] shapes)
{
    double area = 0;
    foreach (var shape in shapes)
    {
        if (shape is Rectangle)
        {
            Rectangle rectangle = (Rectangle) shape;
            area += rectangle.Width*rectangle.Height;
        }
        else
        {
            Circle circle = (Circle)shape;
            area += circle.Radius * circle.Radius * Math.PI;
        }
    }

    return area;
}

( Abrahamsson’s implementation of an area calculator that does not adhere to the open/closed principle. )


public abstract class Shape
{
    public abstract double Area();
}
public class Rectangle : Shape
{
    public double Width { get; set; }
    public double Height { get; set; }
    public override double Area()
    {
        return Width*Height;
    }
}
public class Circle : Shape
{
    public double Radius { get; set; }
    public override double Area()
    {
        return Radius*Radius*Math.PI;
    }
}
public double Area(Shape[] shapes)
{
    double area = 0;
    foreach (var shape in shapes)
    {
        area += shape.Area();
    }

    return area;
}

( Abrahamsson’s implementation of an area calculator that adheres to the open/closed principle. )
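
Since Abrahamsson’s examples are written in C#, here is a rough Java equivalent of the open/closed version (my own translation, so the details may differ slightly from how he would write it):

abstract class Shape {
    public abstract double area();
}

class Rectangle extends Shape {
    private final double width;
    private final double height;

    Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    @Override
    public double area() {
        return width * height;
    }
}

class Circle extends Shape {
    private final double radius;

    Circle(double radius) {
        this.radius = radius;
    }

    @Override
    public double area() {
        return radius * radius * Math.PI;
    }
}

class AreaCalculator {
    // Adding a new Shape subclass requires no change to this method
    public double area(Shape[] shapes) {
        double total = 0;
        for (Shape shape : shapes) {
            total += shape.area();
        }
        return total;
    }
}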

Abrahamsson ends the article by sharing his thoughts on when the open/closed principle should be adhered to.  He believes that the primary focus of any good programmer should be to write code well enough that it doesn’t need to be repeatedly modified as the program grows.  Conversely, he says that the context of each situation should be considered because unnecessarily applying the open/closed principle can sometimes lead to an overly complex design.  I have always known that it is probably good practice to write code that is prepared for the requirements of the program to change, and this principle confirmed that idea.  From this point forward, I will take the open/closed principle into consideration when tackling new projects.

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.

Predictive Applications and the ‘Datafication’ of Everything

We live in a world where we are constantly being bombarded with information. Not only do we consume insane amounts of data, we also provide other people and businesses with information about ourselves. When we sign up for online mailing lists, order magazine subscriptions, or even make dinner reservations, information about our habits and preferences is constantly left behind, a concept that Charlie Berger refers to as data exhaust in a podcast from October 10, 2017 on Software Engineering Radio. The larger concept he is describing is known as ‘datafication’, a buzzword in the data science and big data spheres that refers to collecting and storing information about social actions so that it can be used for predictive analyses and targeted marketing.

Specific to the computer science discipline, datafication has implications for the development of predictive applications. In the podcast episode, Berger presents the simple yet extremely effective example of an ATM as an application that is lacking in the predictive sense. Berger wonders why, each time he uses the ATM, he is asked which language he would like to use, and why such preferences are not tracked and stored to make for a more seamless and personalized ATM experience. Berger even suggests that the ATM track more than language preferences, offering withdrawal suggestions based on previous transaction data from a similar day of the week or time of day.

While it may not be terribly inconvenient to have to choose a language each time you use the ATM, the advantages of creating and using predictive applications become much more apparent when considering larger-scale operations. Retailers can use predictive applications to make important decisions about things like advertising and merchandising. Berger mentions the well-known “parable of the beer and diapers,” where an interesting and entirely unexpected correlation was found between purchases of diapers and beer. While some versions of the tale include the retailer moving the two correlated items next to one another in order to drive increases in sales, this may or may not be factual. Regardless, generating useful information by querying data in this way is a perfect example of the power that predictive applications have.

Berger repeatedly stresses the importance of moving the algorithm to the data, not vice-versa. By moving the algorithm to the data, we avoid the dangers of bypassing security and encryption. Developing applications that perform queries and compile information that is usable and useful not only to data scientists but to normal people as well is a perfect example of how machine learning and predictive applications can make everyone’s job easier.

As a student, I took one of Berger’s closing remarks under careful consideration. Berger states that it is much easier for a programmer to learn how to make a program that interprets data than for a data scientist to translate his specific, one-off analyses into programs. With a newfound understanding of why predictive applications are so important to our data-obsessed society, I look forward to exploring how I can begin developing applications that take advantage of machine learning.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Software Design Patterns

Depending on the type of project you are designing, there will always be a need to adopt a particular pattern that clearly represents your design. There are many design patterns that a programmer can rely on to structure his or her project. For the purpose of my studies, I will be exploring and discussing the three most useful design pattern categories: creational design patterns, structural design patterns, and behavioral design patterns. Below is a brief description of each category.

Creational Design Patterns: These patterns create objects as needed within the code and include the abstract factory, which groups objects of similar functionality. Polymorphism is used so that one object can take on multiple behaviors behind a single method or interface. With these patterns, you do not need to declare the exact class or type up front, because polymorphism is used in the end to assign the behavior. Usually, an abstract prototype is created and the base classes that inherit from it are defined.

Structural Design Patterns: Structural patterns deal largely with the composition of classes and objects, using inheritance and interfaces to enable objects to provide new functionality. They often involve an abstract class or interface that defines the method signatures and behaviors for the classes that will implement it. In structural patterns, objects are grouped according to their behavior and what they inherit, and objects are composed or modified before your code is finalized.

Behavioral Design Patterns: These patterns allow the behavior of a class to change based on its current state. Even though those states may change throughout the program, the implementation of each state is defined by a unique interface. Behavioral patterns also allow new operations to be added to an object without having to modify its original implementation structure.
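
As a small illustration of the creational category (this sketch is mine, not from the referenced guide, and the Shape and ShapeFactory names are made up), a simple factory creates objects as needed without the caller ever naming the exact concrete class:

// The product interface: callers work with Shape, not with concrete classes
interface Shape {
    void draw();
}

class Circle implements Shape {
    public void draw() { System.out.println("Drawing a circle"); }
}

class Square implements Shape {
    public void draw() { System.out.println("Drawing a square"); }
}

// The factory decides which concrete class to instantiate
class ShapeFactory {
    public Shape create(String kind) {
        if ("circle".equalsIgnoreCase(kind)) {
            return new Circle();
        }
        return new Square();
    }
}

class FactoryDemo {
    public static void main(String[] args) {
        Shape shape = new ShapeFactory().create("circle");
        shape.draw(); // polymorphism picks the Circle behavior at runtime
    }
}

The caller never declares the exact type it gets back; polymorphism assigns the behavior, which is exactly the idea described above.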

In general, when you are talking about code responsibility, you really want the methods in your classes to do one thing and do it well. To use a real-life example, it is like texting and driving: there is no way to do both effectively at the same time. You are either going to drive off the road while texting well, or drive well and barely text. Likewise, having your code do two or more things means it will do one thing well and do badly with the rest. Also, changing one thing in a base class might have side effects elsewhere, so as a designer you will want to extend your classes rather than modify them. Even though you will have more classes created in order to have each of them do one thing, it is worth doing because it provides a clearer presentation of your code. This is going to help me a lot in the future, because I am going to have each class in my code perform one task, which provides higher efficiency.

References:

https://airbrake.io/blog/design-patterns/software-design-patterns-guide

From the blog CS@Worcester – Computer Science Exploration by ioplay and used with permission of the author. All other rights reserved by the author.