Category Archives: Week 6

Blog #4 – Design Patterns in Real Life

The blog I chose to write about this week is a post titled “Design Patterns in Real Life,” in which the author, Karthik Kumar, creates his own examples of design patterns modeled after real-world situations to help himself and the reader understand design patterns better.

http://karthikkumar.me/design-patterns

I chose this blog because I think it is important for those learning new programming concepts to see examples that relate to the real world; it helps them understand the concepts much faster.

Kumar begins his blog by explaining how, after reading the popular book “Design Patterns”, he noticed that many of the examples used in the book were hard to relate to and that the “Known Uses” sections of each chapter were outdated. To understand the design patterns better, he created his own example for each one, and in this blog he explains the patterns with these tangible, real-life examples.

Kumar starts off by explaining “Creational Patterns,” which deal with how objects are created. They control how objects are built and can have a huge impact on the design of a system; they can also allow a system to be independent of how its objects are created.

He then goes into detail about a creational pattern called the Builder Pattern. This pattern separates how an object is constructed from its actual representation, which allows us to use the same general construction process to create different representations.

The real-world example Kumar uses for the Builder Pattern is the car manufacturing industry. He explains how, when building a car to a certain specification, the manufacturer chooses several options, such as the color, the engine, and any additional features. In this example, the client interacts directly with the builder to manufacture cars. The advantage of using the Builder Pattern is that we can construct cars with different characteristics using the same builder, simply by passing different parameters to it.
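
To make the idea concrete for myself, here is a minimal sketch of how the car example might look in Java. The Car and Builder classes are my own illustration rather than code from Kumar's post; the point is just that the same construction process can produce different representations.

    public class Car {
        private final String color;
        private final String engine;
        private final boolean sunroof;

        private Car(Builder builder) {
            this.color = builder.color;
            this.engine = builder.engine;
            this.sunroof = builder.sunroof;
        }

        @Override
        public String toString() {
            return color + " car with " + engine + " engine"
                    + (sunroof ? " and sunroof" : "");
        }

        // The builder collects options step by step, then assembles the Car.
        public static class Builder {
            private String color = "white";
            private String engine = "standard";
            private boolean sunroof = false;

            public Builder color(String color)   { this.color = color; return this; }
            public Builder engine(String engine) { this.engine = engine; return this; }
            public Builder sunroof(boolean s)    { this.sunroof = s; return this; }

            public Car build() { return new Car(this); }
        }

        public static void main(String[] args) {
            // Same construction process, different representations.
            Car basic  = new Car.Builder().build();
            Car luxury = new Car.Builder().color("black").engine("V8").sunroof(true).build();
            System.out.println(basic);
            System.out.println(luxury);
        }
    }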

Kumar then explains Structural Patterns, which deal with how classes and objects are built and composed as part of a larger system. He continues with a structural pattern called the Facade Pattern, which provides a single unified interface that encompasses several interfaces in a subsystem. It acts as a funnel, exposing a single interface to many clients while hiding the actual subsystem that is responsible for doing the work requested.

The example Kumar uses for the Facade pattern is Amazon’s 1-click ordering system. When ordering an item, a customer is presented with a simple interface to purchase it, but several complex processes run behind the scenes to complete the purchase, and the user never sees them.
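
Here is a rough sketch of how a facade like this might be structured in Java. The subsystem classes are hypothetical stand-ins I made up for illustration, not Amazon's actual services; the client only ever sees the single order() call.

    // Hypothetical subsystems hidden behind the facade.
    class InventoryService { boolean reserve(String item) { return true; } }
    class PaymentService   { boolean charge(String customer, double amount) { return true; } }
    class ShippingService  { void ship(String item, String customer) { System.out.println("Shipping " + item); } }

    // The facade exposes one simple call and coordinates the subsystems itself.
    class OneClickOrderFacade {
        private final InventoryService inventory = new InventoryService();
        private final PaymentService payment = new PaymentService();
        private final ShippingService shipping = new ShippingService();

        public boolean order(String customer, String item, double price) {
            if (!inventory.reserve(item)) return false;
            if (!payment.charge(customer, price)) return false;
            shipping.ship(item, customer);
            return true;
        }
    }

    public class FacadeDemo {
        public static void main(String[] args) {
            // The client sees only the single unified interface.
            new OneClickOrderFacade().order("alice", "book", 12.99);
        }
    }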

Kumar takes design patterns that may be hard for us to understand and uses everyday, real-world examples to make each one easier to grasp, which I really like. I hadn’t looked at these design patterns in depth before, and this blog helped me understand them quickly thanks to the examples provided.

From the blog CS@Worcester – Decode My Life by decodemylifeblog and used with permission of the author. All other rights reserved by the author.

Leaving a trail…

Source: https://blog.codinghorror.com/if-it-isnt-documented-it-doesnt-exist/

“If It Isn’t Documented, It Doesn’t Exist” are probably words to live by, written by Jeff Atwood. Jeff shares his thoughts on proper documentation based on his personal experience working on open source projects, which can be summarized in a single sentence from the blog: “Good documentation is hard to find.” He also agrees with a couple of key points made by James Bennett, who wrote the blog post Choosing a JavaScript Library: give a proper overview of each section of your project or design, provide examples of usage where needed, document everything, and keep regular comments throughout the code itself. Although the advice was written specifically for JavaScript, I attempted to apply it to regular Java coding as well. It all leads up to another great statement, “most treat documentation like an afterthought,” made by Mr. Bennett.

Truthfully, upon finishing and reviewing my own code for Assignment 1 in my CS-343 class, I realized that documentation and comments within the code were non-existent. That led me to relearn the importance of proper documentation, as once taught in my CS-140 class. I can see that it has not been a requirement or a big factor in assignments, even in the other CS courses I have taken between CS-140 and CS-343. However, it’s important to remember that a properly documented project can easily benefit yourself and others in many ways. For example, it can assist in explanations or allow others to understand what is being done at a certain point, and it can help you pick up where you left off after reading what you have already written. As Nicholas Zakas simply stated, “…if you’re the only one who understands it, it doesn’t do any good.”

Practicing proper documentation techniques early on will help develop the skill of judging how much documentation is necessary. Too much unnecessary documentation can hurt more than documenting only what is needed, which is a problem I had when I actually applied documentation within projects. This also includes understanding when and where to use comments, Javadocs, etc. throughout a project. Currently, I do treat documentation like an afterthought, to the point that it isn’t applied at all. In the future, I hope to apply this skill and use it to my advantage, not only for myself but for others as well.
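
As a small example of the habit I want to build, here is a sketch of the kind of Javadoc and inline comments I could have written for Assignment 1. The ScoreUtils class and average() method are my own illustration, not code from either blog post.

    public class ScoreUtils {

        /**
         * Returns the arithmetic mean of the given scores.
         *
         * @param scores the scores to average; must not be null or empty
         * @return the average of all values in {@code scores}
         * @throws IllegalArgumentException if {@code scores} is null or empty
         */
        public static double average(int[] scores) {
            if (scores == null || scores.length == 0) {
                throw new IllegalArgumentException("scores must not be null or empty");
            }
            double sum = 0;          // running total of the scores
            for (int s : scores) {
                sum += s;
            }
            return sum / scores.length;
        }
    }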

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

7 Tips for Writing Better Unit Tests in Java

This week I chose a blog on unit testing, which can be found here: 7-tips-writing-unit-tests-java. I chose this article because we are starting to look at JUnit testing in class, and it seems to give a good overview of some of the fundamentals of unit testing. The article is designed to help you write better unit tests, and it does this by recommending seven tips.

The first tip in the article is to use a framework for unit testing. The two frameworks the article discusses are the two most popular, JUnit and TestNG. These frameworks make it much easier to set up and run tests, and they make tests easier to work with by allowing you to group tests, set parameters, and much more. They also support automated test execution by integrating with build tools like Maven and Gradle. The article closes this tip by mentioning an add-on for JUnit or TestNG called EasyMock, which allows you to create mock objects to facilitate testing.
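
To get a feel for what this looks like, here is a minimal JUnit 4-style test sketch of my own (not code from the article), with a made-up Calculator class included so it stands on its own.

    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CalculatorTest {

        private Calculator calculator;

        @Before
        public void setUp() {
            // Runs before each test so every test starts from a clean state.
            calculator = new Calculator();
        }

        @Test
        public void addReturnsTheSumOfTwoNumbers() {
            assertEquals(5, calculator.add(2, 3));
        }

        @Test
        public void addHandlesNegativeNumbers() {
            assertEquals(-1, calculator.add(2, -3));
        }
    }

    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }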

The second tip is test-driven development (TDD), a process in which tests are written based on the requirements before any coding begins. The tests initially fail, the minimum amount of code is written to make them pass, and the code is then refactored until it is optimized. TDD leads to simple, modular code that is easy to maintain and speeds up development time; however, it is not suited to very complicated designs or to applications that work with databases or GUIs. The third tip is measuring code coverage: generally, the higher the percentage of code that is covered, the less likely you are to miss bugs. The article mentions tools like Clover, Cobertura, JaCoCo, and Sonar, which point out areas of code that are untested and can help you develop tests to cover those areas. High code coverage does not, however, guarantee that the tests themselves are working perfectly.
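
Going back to the second tip, here is a small sketch of the TDD cycle as I understand it, using a FizzBuzz-style example of my own rather than anything from the article.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // The tests below are written first and fail (red) because fizzBuzz()
    // does not exist yet; the minimal implementation underneath is then
    // written just to make them pass (green), and can be refactored later.
    public class FizzBuzzTest {

        @Test
        public void multiplesOfThreeReturnFizz() {
            assertEquals("Fizz", FizzBuzz.fizzBuzz(3));
        }

        @Test
        public void otherNumbersReturnTheNumberItself() {
            assertEquals("4", FizzBuzz.fizzBuzz(4));
        }
    }

    // Minimum amount of code needed to pass the tests above.
    class FizzBuzz {
        static String fizzBuzz(int n) {
            return (n % 3 == 0) ? "Fizz" : String.valueOf(n);
        }
    }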

The fourth tip in the article is to externalize test data wherever possible, so that test cases can be run against different data sets without changing the source code. The article gives code examples of how to do this in both JUnit and TestNG. The fifth tip is to use assertions instead of print statements. Assertions automatically indicate test results, while print statements clutter the code and require manual intervention by the developer to verify the printed output. The article compares two test cases, one with a print statement and one with assertEquals: the print-statement version will always pass, because the result still has to be verified by hand, while the assert version will fail if the method returns a wrong result and requires no developer intervention.
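
Here is a small sketch of that comparison; the StringUtil class is my own hypothetical example, not the article's code.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // The print-based test "passes" no matter what, because a developer has to
    // read the console output and verify it by hand; the assert-based test
    // fails automatically if reverse() ever returns the wrong result.
    public class ReverseTest {

        @Test
        public void printVersionAlwaysPasses() {
            System.out.println(StringUtil.reverse("abc")); // needs manual checking
        }

        @Test
        public void assertVersionFailsOnWrongResult() {
            assertEquals("cba", StringUtil.reverse("abc"));
        }
    }

    class StringUtil {
        static String reverse(String s) {
            return new StringBuilder(s).reverse().toString();
        }
    }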

The sixth tip is to build tests that have deterministic results. Not all methods produce a deterministic result; the article gives a code example in which a method calculates the time required to execute a very complex function, and in that case it would not make sense to test the value because the output is variable. The seventh and final tip is to test negative scenarios and borderline cases in addition to positive scenarios. This includes testing both valid and invalid inputs, as well as inputs at the borderline and at the extreme values. This is similar to the testing we have done in class, such as robust worst-case testing.
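
As a sketch of what borderline and negative testing might look like, here is a hypothetical validator of my own that only accepts ages from 0 to 120, tested at and just outside its boundaries.

    import org.junit.Test;
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    // Tests at the boundaries (0 and 120) and just outside them (-1 and 121),
    // in addition to the obvious valid values.
    public class AgeValidatorTest {

        @Test
        public void acceptsValuesOnTheBoundaries() {
            assertTrue(AgeValidator.isValid(0));    // lower boundary
            assertTrue(AgeValidator.isValid(120));  // upper boundary
        }

        @Test
        public void rejectsValuesJustOutsideTheBoundaries() {
            assertFalse(AgeValidator.isValid(-1));
            assertFalse(AgeValidator.isValid(121));
        }
    }

    class AgeValidator {
        static boolean isValid(int age) {
            return age >= 0 && age <= 120;
        }
    }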

I chose this article because we are starting to look at unit testing, and specifically JUnit testing, and I thought it would be interesting to look at some of the basics of unit testing to familiarize myself with it. The parts that stood out to me were the sections on test-driven development and code coverage. I like test-driven development because it seems like it would fit well into object-oriented design and allow for some sleek coding. As for the code coverage section, I like how it included multiple plugins for measuring coverage; these are tools I have never used before, and they seem like they would be a great help both in measuring code coverage and in figuring out which sections of code to test next. The last couple of tips are very similar to the type of testing we have been doing in class, which gave some insight into how those techniques can be used in unit testing. The last thing I liked about the article was the code examples of tests included for both JUnit and TestNG, which gave some insight into how certain tests could be run and into some of the differences between JUnit and TestNG.

In conclusion, I enjoyed the article even though it was quite simple at times. It gave good insight into unit testing and into the different tools that can be used to write better tests and make your life as a tester much easier and more enjoyable. My one complaint is that the article does not go into very much detail in some of the sections, specifically the ones on deterministic results and on assertions; I felt both could have benefited from greater detail. Both sections were short and had only one example each, and additional examples and greater detail would have gotten the ideas across much more clearly.

From the blog CS@Worcester – Dhimitris CS Blog by dnatsis and used with permission of the author. All other rights reserved by the author.

CS@Worcester – Fun in Function 2017-10-23 22:19:09

The article referenced in this blog post can be found here.

This past week I found an article which put forward an unconventional idea: unit testing smells. I picked this article because applying the concept of code smells to test code was intriguing to me. The idea is that certain things that can happen in the course of writing and running your test code can inform you that something is not quite right with your production code. They aren’t bugs or test failures, but, like all code smells, indicators of poor design which could lead to difficulties down the line.

Firstly, the author suggests that having a very difficult time writing tests could signify that you haven’t written testable code. He explains that most of the time, it’s an indicator of high coupling. This can be a problem for novice testers especially, as they’ll often assume the problem is with them rather than with the code they’re attempting to test.

If you’re writing tests well enough but doing elaborately difficult things to get at the code you’re trying to test, it’s another testing smell. The author writes that this is likely the result of writing an iceberg class, which was a new term for me. Essentially, too much is encapsulated in one class, which leads to the necessity of mechanisms like reflection schemes to get to internal methods you’re trying to test. Instead, these methods should probably be public in a separate class.

Tests taking a long time to run is a smell. It could mean that you’re doing something other than unit testing, like accessing a database or writing a file, or you could have found an inefficient part of the production code that needs to be optimized.

A particularly insidious test smell is intermittent test failure. This test passes over and over again, but every once in a while, it will fail when given the exact same input as always. This tells you nothing definitive about what’s going on, which is a real problem when you’re performing tests specifically to get a definitive answer about whether your code is working as intended. If you generate a random number somewhere in the production code, it could be that the test is failing for some specific number. It could be a problem with the test you wrote. It could be that you don’t actually understand the behavior of the production code. This kind of smell is a hassle to address, but it’s absolutely crucial to figure out.
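
To picture this smell, here is a hypothetical sketch of my own (not from the article) in which a random value in the production code makes a test pass most of the time but fail occasionally on the exact same input.

    import java.util.Random;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // This test usually passes, but the random surcharge occasionally pushes
    // the discount over 10%, so it fails intermittently with the same input.
    public class DiscountTest {

        @Test
        public void discountNeverExceedsTenPercent() {
            assertTrue(Discount.randomDiscount() <= 0.10);
        }
    }

    class Discount {
        private static final Random RANDOM = new Random();

        static double randomDiscount() {
            // Bug: can return values up to 0.11, which the test will catch
            // only on some runs.
            return RANDOM.nextDouble() * 0.11;
        }
    }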

Having read this, I won’t just pay attention to whether my tests yield the expected result, but will look out for these signs of design flaws in the code being tested.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

What makes frameworks so cool?

This week, I decided to tackle the idea of frameworks. I personally have messed with Bootstrap, Spring, and Node/Express, but even with some experience tinkering in these frameworks, I still did not quite comprehend why they are such a required skill for developing in their respective languages. I chose this article because everywhere you look in the software development world, everything is about the latest framework, whether in blog posts, tech articles, or, most importantly, job postings: everyone is expected to know a major framework for their language, listed as a required skill. The article I found on InfoWorld tackles what makes frameworks so powerful and why they are the foundation of the future of software development.

Probably the biggest point the article is trying to make is that syntax does not really matter anymore. One of the secondary points backing this up is the idea that architecture should be the focus instead of the minute details of a language’s syntax; the focus should be on how to utilize existing libraries and frameworks by reading the documentation and figuring out the little details as you go. Personally, when I first started writing code, I focused excessively on the syntax of Java instead of understanding the data structures themselves. This is a good example because most of the data structures we use in practice are part of the Collections framework within Java, and a strong understanding of this framework has helped me write better code more efficiently.
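
As a small illustration of what I mean, leaning on the Collections framework looks something like this (my own example, not one from the article):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    // Sorting and searching come from the framework, so the focus stays on
    // what the data structure is for rather than how it is implemented.
    public class CollectionsDemo {
        public static void main(String[] args) {
            List<String> languages = new ArrayList<>(Arrays.asList("Java", "Go", "Rust"));
            Collections.sort(languages);                     // [Go, Java, Rust]
            System.out.println(languages);
            System.out.println(Collections.max(languages));  // Rust
        }
    }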

Another secondary point the article makes to back up the idea that syntax is dying is the growing area of visual languages. This was completely new to me, as I would not really have considered visual languages part of the software development process. It is hard to ignore the growth of products like SquareSpace and Wix and tools like AndroidBuilder. While Wix and SquareSpace are not exactly what the article is referring to, I feel it is important to consider these tools when discussing visual languages, since they alleviate the need for developers among small business owners who only need simple websites or web applications. I’m not too familiar with AndroidBuilder, but from the article I gather that it is more of a tool for a developer to manipulate. I agree with the article that while visual languages will continue to grow, they will never replace the traditional means of creating applications; they do, however, diminish some of the need to learn nitty-gritty syntax.

These are just a couple of the seven reasons the author gives for why frameworks are becoming the new programming languages. I hope to use the core ideas of this article when tackling frameworks, including Angular.js, which we will be working with shortly. Considering my minimal experience writing JavaScript applications, this will be necessary if I want my final project to be successful. Hopefully I can translate my new knowledge into productivity.

Here is the original article: https://www.infoworld.com/article/2902242/application-development/7-reasons-why-frameworks-are-the-new-programming-languages.html

From the blog CS@Worcester – Learning Software Development by sburke4747 and used with permission of the author. All other rights reserved by the author.

Black Box vs. White Box vs. Grey Box

For this post I chose an article called “Black box, grey box, white box testing: what differences?” I chose it because grey box testing is something I haven’t seen explained before, and I thought it would be a good idea to have the concepts of all three types laid out to use as a reference down the road.

The first type explained is black box testing. This is described as testing from a user’s perspective: you are testing for functionality, verifying that a system does what it is supposed to do but not how it does it. In other words, the internals or code of the system are irrelevant to your tests. The priority is testing user paths and making sure the system behaves correctly on each path. Some benefits of black box testing are that the tests are usually simple to create, which also makes them quicker to create. Drawbacks include missing vulnerabilities in the underlying code, as well as redundancy if other testing is already being done.

The next type is white box testing, which is testing from a developer’s perspective. You have access to a system’s internal processes and code, and it’s important to understand that code. White box testing is aimed at checking things like data flow, handling of errors and exceptions, and resource dependencies. Advantages include the ability to optimize the system and complete or near-complete code coverage. Disadvantages include complexity, the amount of time it takes, and the expense.

The last type is grey box testing. As the name suggests, it is a mixture of black and white box testing. The tester checks for functionality with some knowledge of the internal system but still does not have access to the source code. One advantage of grey box testing is impartiality: a line still exists between the tester and developer roles. Another advantage is more intelligent testing; by knowing some of the underlying system, you can target your testing to better cover the functionality. The main disadvantage is the continued lack of source code access, without which you cannot provide complete test coverage.

After reading the article, it seems that going with only one of these types of testing would never really be enough. I would argue that white box testing is the most important: being able to test a system internally and cover your code is extremely important. Without access to the code, a failed functionality test is almost useless, because many different things could have caused the failure. I also feel the description of grey box testing is a little vague; while the tester may not have access to the source code, I’m unsure how much they actually do know. In conclusion, this was a good refresher on black and white box testing, as well as a good intro to grey box testing.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Intro to Layered Pattern

For this post I chose the article “Software Architecture Patterns,” which focuses on the layered architecture pattern. I chose this article because up to this point I’d only focused on design patterns, so I wanted to shift my direction. After some googling, it seems the layered pattern is one of the most common, so I thought it would be a good way to move into software architecture.

At the most basic level, the layered pattern consists of components organized into horizontal layers, with each layer having a specific role in an application. The most common layers found across standard applications are presentation, business, persistence, and database. Each layer forms an abstraction around the work that it does; for example, the presentation layer just needs to display data in the correct format, and it doesn’t need to know how to get that data. A useful feature that goes along with this idea is called separation of concerns: the components in a specific layer only deal with logic that pertains to their layer.

One of the key concepts of the layered pattern is open and closed layers. If a layer is “closed,” any request must move to the layer directly below it; an open layer allows a request to bypass it and move on to the next. This isolation of layers decreases dependencies in an application and allows you to change one layer without necessarily needing to change all the others, which makes refactoring a lot easier.
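
To picture this, here is a minimal sketch of three closed layers in Java. The classes are hypothetical and only meant to show that each layer talks to the one directly below it.

    // Persistence layer: knows how to fetch data, nothing about presentation.
    class CustomerRepository {
        String findNameById(int id) {
            return "Customer #" + id;
        }
    }

    // Business layer: applies logic and delegates data access to the layer below.
    class CustomerService {
        private final CustomerRepository repository = new CustomerRepository();

        String customerGreeting(int id) {
            return "Hello, " + repository.findNameById(id);
        }
    }

    // Presentation layer: only formats and displays; it never touches persistence directly.
    public class CustomerScreen {
        public static void main(String[] args) {
            CustomerService service = new CustomerService();
            System.out.println(service.customerGreeting(42));
        }
    }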

The layered pattern is a good starting pattern for any general application. One thing to avoid when using it is the sink-hole anti-pattern, where many requests pass through layers with little to no processing. A good rule of thumb is the 80/20 rule: only about 20% of requests should be simple pass-throughs. In an overall rating, this pattern is great for ease of deployment and testability and not so great for high performance and scalability.

After reading this article, I think the layered design is pretty interesting. For applications with sensitive information, it seems like a good way to control requests and protect data. I also like that each layer is typically independent of the others; this makes changing code and functionality much easier, since you should only need to worry about components in the layer being changed. Moving forward, I’m not sure whether I will use the layered pattern soon, but it has gotten me started thinking about how to approach a software project. Before this article, I had not given much thought to architecture, and I think it gave me a solid intro to what I can expect in further architecture readings.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Object Oriented Knowledge Is Not Inherited

Software Quality Assurance & Testing. URL: https://sourceforge.net/p/tplanrobot/blog/2017/03/image-based-versus-object-oriented-testing/

From the blog CS@Worcester – BenLag's Blog by benlagblog and used with permission of the author. All other rights reserved by the author.

Record and Playback Advantages and Disadvantages

Since last week’s blog did not have a lot of information about Record and Replay (or Record and Playback), I did not know whether I should use it for GUI (graphical user interface) testing or not. Therefore, I decided to learn more about it and its advantages and disadvantages. After reading blogs and articles related to Record and Playback, I chose this particular article because it clearly states the problems testers can run into when using Record and Playback tools as well as the scenarios where Record and Playback can be useful. Below is the URL of the blog:

https://www.cio.com/article/3077286/application-testing/record-playback-automation-its-a-trap.html

In this article, Troy T. Walsh, a principal consultant at Magenic in St. Louis Park, shares his view of Record and Playback as a trap that many projects fall into. He lists the disadvantages these tools have, such as high maintenance cost, limited test coverage, poor understanding of the tools, poor integration, limited features, high price, and being locked in. He also gives some scenarios where Record and Playback might be a good option, such as learning the underlying automation framework from the generated code, load testing, and proving a concept.

According to Troy, Record and Playback offers limited test coverage. Since it follows the exact steps the tester recorded, it is limited to testing against the user interface. That is why it made sense that Record and Playback was recommended for GUI testing last week, but for general test automation it does not have great value. He also thinks that most testers have an incomplete understanding of what exactly these tools are doing, which can lead to huge gaps in test coverage; in my opinion, this disadvantage could be fixed if testers studied the tools before using them. Furthermore, Record and Playback tools lack features that are important for test automation, like remote execution, parallelization, configuration, data driving, and test management integration, and using the feature-rich options requires paying a lot of money every year. Despite those disadvantages, Record and Playback can be used to study the underlying automation framework of the code by recording steps and observing what gets generated.

After reading about the advantages and disadvantages of Record and Playback, I can see that it is not a good tool for test automation, since it is limited in many aspects: high price, high maintenance cost, limited features, limited test coverage, and so on. However, in my opinion, it is good enough to be a GUI testing tool. Since GUI testing checks whether the expected executions, error messages, GUI element layout, fonts, colors, and so on are correct, the testers only need to “record” the steps that a user would take and “play them back” to see the results. Therefore, I would try Record and Playback for GUI testing but not for test automation.

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

LEVELS OF SOFTWARE TESTING

Testing is very important to the development of a successful program. Without testing, there would be no guarantee that a particular piece of code fulfills its design purpose. There are basically four levels of testing, namely unit testing, integration testing, system testing, and acceptance testing. I chose to explore and elaborate on these because we have just started treating these topics in class, beginning with unit testing. Due to time constraints, I will describe the four levels of testing briefly, as follows:

Unit Testing: Unit testing is done by programmers on particular functions or code modules, and the white-box testing method is used to achieve this task. Considering the code as one large program, unit testing deals with each of the pieces that come together to form it and makes sure each section passes its tests. In this regard, it is easy to figure out which part of your code has a problem, and non-functioning sections can be easily fixed. Sections of code can be tested as they are created rather than waiting until the end, which might make problems hard to track down. Unit testing requires knowledge of the internal program design and code.

Integration Testing: Integration testing is done after unit testing and tests combined parts of an application to determine their functional correctness. Unlike unit testing, which tests individual pieces of the code, integration testing gives you the opportunity to gather all the pieces or sections and test them as a group. This lets you determine how well all the units of your code work together, or, more technically, verify the proper interfaces between modules and subsystems.

System Testing: System testing ensures that the system is in line with all the requirements and meets the quality standards as well as the code’s design purposes. It is the first level at which the whole program is tested to make sure it works as one unit. A system test is often done by someone who is not part of the development group, and it is very necessary because it ensures the program meets the technical, functional, and business requirements it was tasked to fulfill.

Acceptance Testing: Acceptance testing is related to the user and is designed for the user to test the system and see whether it meets their standards. In other words, it verifies that the system meets the user requirements. In this stage of testing, if nothing changes and the software passes, the program is delivered to the entity that needs it, and the programmers’ work is done.

It is important to note that these levels of testing are done progressively, from unit testing through acceptance testing. I have come to realize that I cannot jump to acceptance testing without first doing unit, integration, and system testing. This is going to have a great impact on my career, as I now know and clearly understand how testing is done.

References: https://www.seguetech.com/the-four-levels-of-software-testing/

From the blog CS@Worcester – Computer Science Exploration by ioplay and used with permission of the author. All other rights reserved by the author.