Category Archives: Week 6

CS@Worcester – Fun in Function 2017-10-23 22:19:09

The article referenced in this blog post can be found here.

This past week I found an article which put forward an unconventional idea: unit testing smells. I picked this article because applying the concept of code smells to test code was intriguing to me. The idea is that certain things that can happen in the course of writing and running your test code can inform you that something is not quite right with your production code. They aren’t bugs or test failures, but, like all code smells, indicators of poor design which could lead to difficulties down the line.

Firstly, the author suggests that having a very difficult time writing tests could signify that you haven’t written testable code. He explains that most of the time, it’s an indicator of high coupling. This can be a problem with novice testers especially, as they’ll often assume the problem is with them rather than the code they’re attempting to write tests for.

If you can write tests but find yourself doing elaborately difficult things to get at the code you’re trying to test, that’s another testing smell. The author writes that this is likely the result of writing an iceberg class, which was a new term for me. Essentially, too much is encapsulated in one class, which forces you to resort to mechanisms like reflection to reach the internal methods you’re trying to test. Instead, these methods should probably be public methods in a separate class.
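
To illustrate the smell, here is a minimal sketch of what prying open an iceberg class with reflection looks like. The `ReportGenerator` class and its `formatHeader` method are hypothetical examples of mine, not from the article; needing code like this in a test is the warning sign, and the suggested fix is to move `formatHeader` into its own public, focused class.

```java
import java.lang.reflect.Method;

public class IcebergSmellExample {
    public static void main(String[] args) throws Exception {
        ReportGenerator generator = new ReportGenerator();

        // Smell: the test must pry open a private method via reflection
        // because too much is hidden inside one class.
        Method format = ReportGenerator.class
                .getDeclaredMethod("formatHeader", String.class);
        format.setAccessible(true);
        String header = (String) format.invoke(generator, "Q3 Sales");

        System.out.println(header);
    }
}

// Hypothetical iceberg class: formatHeader probably belongs
// in a separate public formatting class instead.
class ReportGenerator {
    private String formatHeader(String title) {
        return "=== " + title + " ===";
    }
}
```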

Tests that take a long time to run are another smell. It could mean that you’re doing something other than unit testing, like accessing a database or writing a file, or you could have found an inefficient part of the production code that needs to be optimized.
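
One way to keep an eye on this smell is to put a time budget on unit tests. As a sketch, JUnit 4 can fail any test that exceeds a timeout; the 100 ms budget here is an arbitrary choice of mine, not a number from the article:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class SpeedSmellTest {

    // Fails if the test runs longer than 100 ms, flagging tests that
    // secretly hit a database, the file system, or slow production code.
    @Test(timeout = 100)
    public void parsesRecordQuickly() {
        String[] fields = "id,name,total".split(",");
        assertEquals(3, fields.length);
    }
}
```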

A particularly insidious test smell is intermittent test failure. This test passes over and over again, but every once in a while, it will fail when given the exact same input as always. This tells you nothing definitive about what’s going on, which is a real problem when you’re performing tests specifically to get a definitive answer about whether your code is working as intended. If you generate a random number somewhere in the production code, it could be that the test is failing for some specific number. It could be a problem with the test you wrote. It could be that you don’t actually understand the behavior of the production code. This kind of smell is a hassle to address, but it’s absolutely crucial to figure out.
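
Here is a contrived sketch of the random-number case, using a hypothetical `Discount` class of my own. The check passes on most runs but fails whenever the generator happens to produce zero; injecting a seeded `Random` is one way to make the behavior deterministic and the failure reproducible:

```java
import java.util.Random;

public class FlakyExample {
    public static void main(String[] args) {
        Discount discount = new Discount(new Random());

        // Passes on most runs, but fails whenever the random percentage
        // lands on 0 -- an intermittent failure that tells you nothing
        // definitive about the production code.
        int percent = discount.randomPercent();
        if (percent <= 0) {
            throw new AssertionError("expected a positive discount, got " + percent);
        }
        System.out.println("discount: " + percent + "%");

        // Deterministic variant: a fixed seed makes every run identical.
        Discount seeded = new Discount(new Random(42L));
        System.out.println("seeded discount: " + seeded.randomPercent() + "%");
    }
}

// Hypothetical production class that picks a discount of 0-9%.
class Discount {
    private final Random random;

    Discount(Random random) {
        this.random = random;
    }

    int randomPercent() {
        return random.nextInt(10);
    }
}
```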

Having read this, I won’t just pay attention to whether my tests yield the expected result, but will look out for these signs of design flaws in the code being tested.

From the blog CS@Worcester – Fun in Function by funinfunction and used with permission of the author. All other rights reserved by the author.

What makes frameworks so cool?

This week, I decided to tackle the idea of frameworks. I have personally messed with Bootstrap, Spring, and Node/Express. Even with some experience tinkering around in these frameworks, I still did not quite comprehend why they are such a required skill for developing in their respective languages. I chose this article because everywhere you look in the software development world, everything is about the latest framework, whether in blog posts, tech articles, or, most importantly, job postings, where knowing a major framework for a language is routinely listed as a required skill. This article, which I found on InfoWorld, tackles what makes frameworks so powerful and why they are the foundation of the future of software development.

Probably the biggest point this article makes is that syntax does not really matter anymore. One of the secondary points backing this up is the idea that architecture should be the focus instead of the minute details of a language’s syntax. The focus should be on how to utilize existing libraries and frameworks by reading the documentation and figuring out the little details as you go. Personally, when I first started writing code, I focused excessively on the syntax of Java instead of understanding the data structures themselves. This is a good example because most of the data structures we use in practice are part of the Collections framework within Java. A strong understanding of this framework has helped me write better code more efficiently.
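
A small example of the point: the code below leans entirely on the Collections framework, and the payoff comes from knowing that `List`, `Map`, and ready-made algorithms like `Collections.sort` exist, not from memorizing syntax minutiae:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CollectionsDemo {
    public static void main(String[] args) {
        // A resizable list from the Collections framework.
        List<String> languages = new ArrayList<>();
        Collections.addAll(languages, "Java", "JavaScript", "Python");
        Collections.sort(languages); // ready-made sorting, no hand-rolled algorithm

        // A map that associates each language with its name length.
        Map<String, Integer> lengths = new HashMap<>();
        for (String lang : languages) {
            lengths.put(lang, lang.length());
        }

        System.out.println(languages); // [Java, JavaScript, Python]
        System.out.println(lengths);
    }
}
```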

Another secondary point the article makes to back up the idea that syntax is dying is the growing area of visual languages. This was completely new to me, as I would not really have considered visual languages to be part of the software development process. It is hard to ignore the growth of products like SquareSpace and Wix, and tools like AndroidBuilder. While Wix and SquareSpace are not exactly what the article is referring to, I feel it is important to consider them when discussing visual languages, since they let small business owners who only need simple websites or web applications get by without hiring developers. I’m not too familiar with AndroidBuilder, but from the article I gather that it is more of a tool for a developer to manipulate. I agree with the article that while visual languages will continue to grow, they will never replace the traditional means of creating applications. They do, however, diminish some of the need for learning nitty-gritty syntax.

These are just a couple of the seven reasons the author gives for why frameworks are becoming the new programming languages. I hope to use the core ideas of this article when tackling frameworks, including Angular.js, which we will be working with shortly. Considering my minimal experience writing JavaScript applications, this will be necessary if I want to succeed on my final project. Hopefully I can translate my new knowledge into productivity.

Here is the original article: https://www.infoworld.com/article/2902242/application-development/7-reasons-why-frameworks-are-the-new-programming-languages.html

From the blog CS@Worcester – Learning Software Development by sburke4747 and used with permission of the author. All other rights reserved by the author.

Black Box vs. White Box vs. Grey Box

For this post I chose an article called “Black box, grey box, white box testing: what differences?” I chose this article because grey box is something I haven’t seen explained and I thought it would be a good idea to get the concepts of all three types explained to use as a reference down the road.

The first type explained is black box testing, which is described as testing with a user’s profile. You are testing for functionality: that a system does what it is supposed to do, but not how it does it. In other words, the internals and code of the system are irrelevant to your tests. The priority is testing user paths and checking that the system behaves correctly on each path. Some benefits of black box testing are that the tests are usually simple to create, which also makes them quick to create. Drawbacks include missing vulnerabilities in the underlying code, as well as redundancy if other testing is already being done.

The next type of testing is white box. This is testing with a developer’s profile: you have access to a system’s internal processes and code, and it’s important to understand that code. White box testing is aimed at checking things like data flow, handling of errors and exceptions, and resource dependencies. Advantages of white box testing include optimizing the system and complete or near-complete code coverage. Disadvantages include complexity, the time it takes, and the expense.
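
To make the contrast concrete, here is a hedged JUnit sketch against a hypothetical `PasswordChecker` class of my own. The first test is black box: it checks documented behavior without caring about the implementation. The second is white box: it was written after reading the code, specifically to cover the null/empty branch inside `isValid`:

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class PasswordCheckerTest {

    // Black box: exercises the documented rule "8+ characters are valid"
    // through the public interface only.
    @Test
    public void acceptsLongPassword() {
        assertTrue(new PasswordChecker().isValid("correcthorse"));
    }

    // White box: targets a specific internal branch a tester would only
    // know about by reading the source.
    @Test
    public void rejectsNullAndEmptyInput() {
        PasswordChecker checker = new PasswordChecker();
        assertFalse(checker.isValid(null));
        assertFalse(checker.isValid(""));
    }
}

// Hypothetical class under test.
class PasswordChecker {
    boolean isValid(String password) {
        if (password == null || password.isEmpty()) {
            return false; // the branch the white-box test covers
        }
        return password.length() >= 8;
    }
}
```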

The last type of testing is grey box testing. As the name suggests, it is a mixture of both black and white box testing. The tester checks for functionality with some knowledge of the internal system but still does not have access to the source code. One advantage of grey box testing is impartiality: a line still exists between the tester and developer roles. Another advantage is more intelligent testing; by knowing some of the underlying system, you can target your tests to better cover the functionality. The main disadvantage that remains is the lack of source code access. Without it you cannot provide complete test coverage.

After reading the article, it seems going with only one of these types of testing would never really be enough. I would argue that white box testing is the most important. Being able to actually test a system internally and cover your code is extremely important; without access to the code, a failing functional test is of limited use, as many things could have caused the failure. I feel the description of grey box testing is a little vague: while the tester may not have access to the source code, I’m unsure how much they actually do know. In conclusion, this was a good refresher on black and white box testing as well as a good intro to grey box testing.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Intro to Layered Pattern

For this post I chose the article “Software Architecture Patterns” which focuses on the layered architecture pattern. I chose this article because up to this point I’d only focused on design patterns so I wanted to shift my direction. After some googling it seems the layered pattern is one of the most common so I thought it’d be a good way to move into software architecture.

At the most basic level, the layered pattern consists of components organized into horizontal layers, with each layer having a specific role in the application. The most common layers you will find across standard applications are presentation, business, persistence, and database. Each layer forms an abstraction around the work that it does. That means, for example, that the presentation layer just needs to display data in the correct format; it doesn’t need to know how to get that data. A related idea is called separation of concerns: the components in a specific layer deal only with logic that pertains to their layer.

One of the key concepts in the layered pattern is having open and closed layers. If a layer is “closed”, any request must pass through it on the way to the layer directly below; an open layer allows a request to bypass it and move to the next. This isolation of layers decreases dependency within the application and lets you change one layer without necessarily needing to change all the others, which makes refactoring a lot easier.
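
Here is a minimal sketch of the idea in Java; every class name is hypothetical. The presentation code only knows about the business layer, and the business layer is the only thing that talks to persistence, so swapping out the persistence implementation never touches the screen code:

```java
// Persistence layer: the only code that knows where the data lives.
class CustomerRepository {
    String findNameById(int id) {
        return "Customer #" + id; // stand-in for a real database query
    }
}

// Business layer: applies business logic, delegates storage to persistence.
class CustomerService {
    private final CustomerRepository repository = new CustomerRepository();

    String customerGreeting(int id) {
        return "Welcome back, " + repository.findNameById(id) + "!";
    }
}

// Presentation layer: formats and displays, never touches persistence.
public class CustomerScreen {
    public static void main(String[] args) {
        CustomerService service = new CustomerService();
        System.out.println(service.customerGreeting(42));
    }
}
```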

The layered pattern is a good starting pattern for any general application. One thing to avoid when using it is the architecture sinkhole anti-pattern: having a lot of requests pass through layers with little to no processing done in each. A good rule of thumb is the 80/20 rule, where only about 20% of requests should be simple pass-throughs. In an overall rating of this pattern, it is great for ease of deployment and testability, and not so great for high performance and scalability.
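
In code, a sinkhole request looks something like this hypothetical pass-through, where the business layer adds no logic at all; a few of these are normal, but when most requests look this way the middle layer is pure overhead:

```java
// Sinkhole anti-pattern: the service adds nothing and merely forwards the call.
class AccountService {
    private final AccountRepository repository = new AccountRepository();

    double getBalance(int accountId) {
        return repository.getBalance(accountId); // no validation, no rules
    }
}

class AccountRepository {
    double getBalance(int accountId) {
        return 100.0; // stand-in for a database lookup
    }
}
```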

After reading this article, I think the layered design is pretty interesting. For applications with sensitive information, it seems like a good way to control requests and protect data. I also like that each layer is typically independent of the others; this makes changing code and functionality much easier, as you should only need to worry about the components in the layer being changed. Moving forward, I’m not sure I will use the layered pattern very soon, but it has got me started thinking about how to approach a software project. Before this article I had not given much thought to architecture. I think it gave me a solid intro to what I can expect in further architecture readings.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Object Oriented Knowledge Is Not Inherited

Software quality assurance & testing. URL: https://sourceforge.net/p/tplanrobot/blog/2017/03/image-based-versus-object-oriented-testing/

From the blog CS@Worcester – BenLag's Blog by benlagblog and used with permission of the author. All other rights reserved by the author.

Record and Playback Advantages and Disadvantages

Since last week’s blog did not have a lot of information about Record and Replay (or Record and Playback), I did not know whether I should use it for GUI (Graphical User Interface) testing or not. Therefore, I decided I should learn more about it and its advantages and disadvantages. After reading blogs and articles related to Record and Playback, I chose this particular article because it clearly states the problems testers can have when using Record and Playback tools and the scenarios in which Record and Playback can be useful. Below is the URL of the article:

https://www.cio.com/article/3077286/application-testing/record-playback-automation-its-a-trap.html

In this article, Troy T. Walsh, a principal consultant at Magenic in St. Louis Park, shares his view of Record and Playback as a trap that many projects fall into. He lays out the disadvantages these tools have, such as high maintenance cost, limited test coverage, poor understanding of the tools, poor integration, limited features, high price, and vendor lock-in. He also gives some scenarios in which Record and Playback might be a good option, like learning the underlying automation framework by studying the code the tools generate, load testing, and proving a concept.

According to Troy, Record and Playback has limited test coverage: since it follows the exact steps the testers recorded, it is limited to testing against the user interface. That fits with last week’s recommendation of Record and Playback for GUI testing, but for broader test automation it does not have great value. He also thinks most testers have an incomplete understanding of what exactly these tools are doing, which can lead to huge gaps in test coverage. In my opinion, this disadvantage could be fixed if testers studied the tools before using them. Furthermore, Record and Playback tools lack features that are important for test automation, like remote execution, parallelization, configuration, data-driven testing, and test management integration, and the feature-rich options require paying a lot of money every year. Despite those disadvantages, Record and Playback can be used to study the underlying automation framework by recording steps and observing what code gets generated.

After reading about the advantages and disadvantages of Record and Playback, I can see that it is not a good tool for general test automation, since it is limited in many aspects: high price, high maintenance cost, limited features, limited test coverage, and so on. However, in my opinion, it is good enough as a GUI testing tool. Since GUI testing checks whether error messages, GUI element layout, fonts, colors, and other expected behaviors appear correctly, the testers only need to “record” the steps that users would take and “play back” to see the results. Therefore, I would try Record and Playback for GUI testing but not for test automation.
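
For a sense of what these tools produce, a recorded “log in and verify” scenario typically exports to straight-line WebDriver code like the sketch below (the Selenium calls are real; the URL and element IDs are hypothetical). Every step is hard-coded, which is exactly why maintenance costs climb as the GUI changes:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class RecordedLoginTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Each line mirrors one recorded user action.
            driver.get("https://example.com/login");
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();

            // Recorded verification: brittle, breaks on any title change.
            if (!driver.getTitle().contains("Dashboard")) {
                throw new AssertionError("Login did not reach the dashboard");
            }
        } finally {
            driver.quit();
        }
    }
}
```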

From the blog CS@Worcester – Learn More Everyday by ziyuan1582 and used with permission of the author. All other rights reserved by the author.

LEVELS OF SOFTWARE TESTING

Testing is very important to the development of a successful program. Without testing, there is no guarantee that a particular piece of code fulfills its design purpose. There are basically four levels of testing: Unit Testing, Integration Testing, System Testing, and Acceptance Testing. I chose to explore and elaborate on these because we have just started covering these topics in class, beginning with unit testing. Due to time constraints, I will describe the four levels briefly, as follows:

Unit Testing: Unit testing is done by programmers on particular functions or code modules, and the white-box testing method is used to achieve this task. Considering the program as a whole, unit testing deals with each piece of code that comes together to form it, making sure each section passes its tests. This makes it easy to figure out which part of your code has a problem, so issues in a non-functioning section can be easily resolved. Sections of code can be tested as they are created rather than waiting until the end, when problems are much harder to pin down. Unit testing requires knowledge of the internal program design and code.
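
As a minimal sketch of what this looks like in practice, here is a JUnit test for a hypothetical `Calculator` class; it exercises one small unit in isolation, so a failure points directly at that piece of code:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    // Tests a single unit (the add method) in isolation.
    @Test
    public void addsTwoNumbers() {
        Calculator calculator = new Calculator();
        assertEquals(7, calculator.add(3, 4));
    }
}

// Hypothetical unit under test.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}
```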

Integration Testing: Integration testing is done after unit testing and tests combined parts of an application to determine their functional correctness. Unlike unit testing, which tests individual pieces of code, integration testing gives you the opportunity to gather all the pieces or sections and test them as a group. This lets you determine how well all the units of your code work together, or, more technically, verify proper interfaces between modules and subsystems.
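
Continuing the same hypothetical example, an integration test wires two units together and verifies the interface between them rather than either unit alone:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ReceiptIntegrationTest {

    // Integration: ReceiptPrinter and Calculator are exercised as a group,
    // verifying that they work together correctly.
    @Test
    public void printsTotalOfTwoItems() {
        ReceiptPrinter printer = new ReceiptPrinter(new Calculator());
        assertEquals("Total: 12", printer.totalLine(5, 7));
    }
}

// Same hypothetical Calculator as in the unit testing sketch above.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

// Hypothetical module that depends on Calculator.
class ReceiptPrinter {
    private final Calculator calculator;

    ReceiptPrinter(Calculator calculator) {
        this.calculator = calculator;
    }

    String totalLine(int a, int b) {
        return "Total: " + calculator.add(a, b);
    }
}
```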

System Testing: System testing ensures that the system is in line with all the requirements and meets the quality standards as well as the code’s design purposes. It is the first level at which the whole program is tested to make sure it works as one unit. A system test is often done by an individual who is not part of the development group, and it is very necessary because it ensures the program meets the technical, functional, and business requirements it was designed for.

Acceptance Testing: Acceptance testing relates to the user and is designed for the user to test the system to see if it meets their standards; in other words, it verifies that the system meets the user’s requirements. If nothing changes at this stage and the software passes, the program is delivered to the entity that needs it, and the programmers’ work is done.

It is important to note that these levels of testing are done progressively, from unit testing up to acceptance testing. I have come to understand that I cannot jump to acceptance testing without first doing unit, integration, and system testing. This is going to have a great impact on my career, as I now clearly understand how testing is done.

References: https://www.seguetech.com/the-four-levels-of-software-testing/

From the blog CS@Worcester – Computer Science Exploration by ioplay and used with permission of the author. All other rights reserved by the author.

QA & TESTING – Episode 1

AB Testing – Episode 1 by Brent Jenson and Allen Page.

In this week’s testing episode, I went back to episode 1 so I could address the topics Allen and Brent found necessary to start their podcast with. Both Allen and Brent are high-end software developers and testers who have worked for many big companies and performed many big tasks in the world of software development and testing. Allen Page was a software testing manager who has contributed to many books on software testing (his books are really good if you want information on software testing). Brent Jenson worked at Microsoft for over 20 years, accumulating experience in the position of Director of Software Testing.

They talked about a meeting method used at Microsoft which I thought could benefit the software testing industry should we all decide to utilize it: Lean Coffee. As comical as it sounds, Lean Coffee is a structured but agenda-less meeting. Participants gather, build an agenda, and begin talking. Conversations are directed and productive because the agenda for the meeting was democratically generated, typically working from the items viewed as most important down to those viewed as less important. This sounded like something we could bring to our testing team meetings!

As quickly as the podcast began, Allen dove into real software testing concepts. He began by emphasizing the great difference between testing and quality. With constant changes and improvisations, reports of system and program bugs quickly lose value: finding bugs in software that changes constantly, sometimes daily, does not establish the quality level of the product, because today’s bug can be fixed in tomorrow’s code change, and that change can in turn create a new bug to be fixed in the next modification. It is the job of software testers to find bugs and errors in the program, and the job of a test manager to schedule test runs and passes for the specific product in development. How could a test manager work successfully in an environment where code changes and modifications are made on a daily basis? The proposed solution goes back to the beginning of the project, where planning and thought have to be put in place: to put forth a great product, time for testing has to be allocated in the project timeline. You can’t put out quality without considering all aspects of possible challenges and inputs.

From the blog CS@Worcester – Le Blog Spot by houtyr and used with permission of the author. All other rights reserved by the author.

The Dangers of Relying on Automated Testing

After listening to Jean Ann Harrison’s discussion on an episode of Test Talks about how important critical thinking is in the context of software testing and quality assurance, I wrote a post about The Limits of Automated Testing. Although Harrison’s explanation was great, I had a few remaining questions, and this week I chose to look for more information on automated testing. I came across a post by Martin Jansson from March 2017 titled Implication of emphasis on automation in CI, and it seemed to provide the more comprehensive view of testing automation that I was looking for.

Jansson starts out on a positive note, stating that he “less frequently see[s] the argumentation that testing is not needed.” To me it is almost comical to think about someone arguing that testing is unnecessary. While I completely understand that managers and executives are enticed by the possibility of saving time and money by not testing software, this is an extremely risky and careless method of creating a product. I doubt that anyone releasing untested software lasts very long or makes any money in the industry.

So if not testing at all is not an option, what are the options? Going with the bare minimum would be running only automated tests, a method that Jansson says is actually used. I have to agree with Jansson, however, when he says that this is not testing, rather it is simply checking. Instead of exploring parts of the code that are likely to contain bugs, you will simply be checking acceptance criteria. By not exploring the code fully, you are failing to find anything that might be outside the scope of the specification or the requirements. I feel that the graphic in Jansson’s post (source below) provides an excellent representation of how few tests are actually performed when following a testing strategy that relies solely on automation.

(Source: http://thetesteye.com/blog/2017/03/implication-of-emphasis-on-automation-in-ci/)

What constitutes the perfect blend of automated and manual testing may be impossible to know. What is certain, however, is that automated testing cannot be relied upon as the sole method of testing. Jansson puts it in layman’s terms when he says that “you rarely automate serendipity.” Just as Jean Ann Harrison points out in the Test Talks podcast mentioned earlier, automation is not and will never be a replacement for thought. It is a bit of a relief to know that software development companies are maturing and beginning to understand the importance of having testers who use a combination of automated and manual testing. As long as there continue to be humans writing code, there will need to be humans who test that code.

From the blog CS@Worcester – ~/GeorgeMatthew/etc by gmatthew and used with permission of the author. All other rights reserved by the author.

Post #7

I began researching good JUnit practices as a follow-up to our discussions of it in class. I found a post on the codecentric Blog by Tobias Goeschel entitled “Writing Better Tests With JUnit” that addresses the pros and cons of JUnit and provides tips on how to improve your own testing. This is the most thorough (and possibly longest) article I’ve found on JUnit testing, so it seems fitting to summarize it in a blog post of my own while we cover the subject in class.
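
As a small taste of the kind of advice the article gives, compare a vague test with one improved along the lines Goeschel recommends: a name that states the behavior, a given/when/then structure, and an assertion that explains itself. The `ShoppingCart` class is my own hypothetical example, not one from the post:

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartTest {

    // Weak: unclear name, no structure, silent assertion.
    @Test
    public void test1() {
        ShoppingCart c = new ShoppingCart();
        c.addItem(250);
        assertEquals(250, c.total());
    }

    // Better: the name documents the behavior, the body separates
    // setup, action, and verification, and the assertion has a message.
    @Test
    public void totalReflectsEveryItemAddedToTheCart() {
        // given
        ShoppingCart cart = new ShoppingCart();
        // when
        cart.addItem(250);
        cart.addItem(100);
        // then
        assertEquals("cart total should be the sum of item prices",
                350, cart.total());
    }
}

// Hypothetical class under test.
class ShoppingCart {
    private int total = 0;

    void addItem(int price) {
        total += price;
    }

    int total() {
        return total;
    }
}
```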

From the blog CS@Worcester – by Ryan Marcelonis and used with permission of the author. All other rights reserved by the author.