Author Archives: fymeri

Headless Browser Testing and Selenium

Today I’ve discovered the amazing world of browser testing.

I’ve been learning about tools lately in our final classes, such as PIT for mutation testing. But using an already existing tool, a web browser, to automate tests was a really cool discovery.

Over on Awesome Testing’s blog, there are many posts about Selenium, which made me finally look it up. On Selenium’s home page, they proudly proclaim, “Selenium automates browsers. That’s it!” They know how amazing just that is. By automating a web browser, the capabilities are nearly limitless: you can distribute scripts across many environments, create bug reproduction scripts, and write scripts to aid in automated exploratory testing.

By using versatile and common tools such as web browsers, including the most popular ones like Chrome and Firefox, one can test all manner of things. Browsers can read HTML, CSS styling, JavaScript, and AJAX responses. They can gather incredible amounts of information and interact with web pages, so with just a small amount of automation they can test almost everything about a web page, and thus about a web site. And since browsers can also view certain files, such as .pdf files, their testing reach extends even further.

The possibilities with Selenium are really wonderful to think about. But the post by Awesome Testing I read today talks specifically about headless browser testing.

What’s a headless browser? Simply a browser without a graphical user interface. Instead, one drives it through a command-line-like interface or a network interface. This is helpful for Continuous Integration, where a display might not always be available; Unix servers, for example, often don’t have display output enabled by default. In those cases, headless browsers let us run browser tests anyway, instead of cobbling together combinations of other tools to do the same job.

By combining Selenium and a headless browser, we can do headless browser testing against servers and web sites. It’s so simple, and also so interesting. This gave me a glimpse of the way professional testers combine multiple tools with their own code (most of the article is dedicated to showcasing Java code for headless browser testing in Firefox) to create a personal toolbox of software for making sure things work. It also showed a concrete example of a testing method used in Continuous Integration, which was nice. And it introduced me to a very exciting new tool, Selenium; having a new toy to play with is always exciting.
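Since I can’t include the article’s own code here, here is a minimal sketch of what headless browser testing with Selenium’s Java bindings might look like. It assumes Selenium and geckodriver are installed, and the URL and class name are placeholders of my own:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;

public class HeadlessSmokeTest {
    public static void main(String[] args) {
        // Run Firefox without a GUI, e.g. on a CI server with no display attached.
        FirefoxOptions options = new FirefoxOptions();
        options.addArguments("-headless");

        WebDriver driver = new FirefoxDriver(options);
        try {
            driver.get("https://example.com"); // hypothetical page under test
            System.out.println("Page title: " + driver.getTitle());
        } finally {
            driver.quit(); // always release the browser process
        }
    }
}
```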

Original post: http://www.awesome-testing.com/2017/09/firefox-selenium-browser-capabilities.html

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

Engineering Productivity

This post is about a new development in software testing, a possible evolution that makes so much logical sense to me that I can’t believe I didn’t draw the conclusion earlier in my posts about TestOps.

Google recently (ok, more like earlier in the year) renamed their Test Automation Conference to the Engineering Productivity conference. They also did the same with their testing team.

And my mind instantly went back to all the TestOps posts I’ve been reading.

All of them showed the natural progression from simple testing into something wholly devoted to increasing the productivity of an engineering project itself: Continuous Improvement, Testing, and Development. Testing not only whether a product works, but whether it is working effectively. Testing and Quality Assurance are simply the beginning of using software to increase the productivity of software engineering projects.

The essence of it is to let the developers themselves handle the tasks of making sure the code works and is of good quality. This means they will be writing tests for their own code. Obviously, just doing this would not be very effective, as developers might not be great at testing. This is why a new task for the Engineering Productivity team is to provide guidance and tools that help developers produce effective tests with good code coverage, tests that also make sure performance is reasonable. By doing this, the team’s direct responsibilities shrink, and it can take on more work aiding Continuous Development and using big data techniques to measure how effective features are at retaining users (for projects where this is applicable), so developers know where to steer their effort.

Of course, there needs to be flexibility. Focusing on testing at the beginning seems to be very effective, leading to many fewer issues down the road. So early on, the team spends its time on things like IDE plugins, code coverage, and effective code review. After release, the team can shift over to Testing in Production tasks, especially when new features or updates start development. One responsibility the team always has is making sure there are no testing bottlenecks: if a test takes too long, people will stop running it altogether at some point.
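On that last point, one small, concrete guard (my own illustration, not from the post) is giving tests an explicit time budget so slow tests surface immediately instead of being quietly skipped. A sketch with JUnit 5, where the class under test and the 500 ms limit are assumptions:

```java
import static org.junit.jupiter.api.Assertions.assertTimeoutPreemptively;

import java.time.Duration;
import org.junit.jupiter.api.Test;

class CheckoutServiceTest { // hypothetical class under test

    @Test
    void priceCalculationStaysFast() {
        // Fail the test outright if it exceeds the budget, so a slow
        // test shows up as a failure rather than a growing annoyance.
        assertTimeoutPreemptively(Duration.ofMillis(500), () -> {
            // ... call the code under test here ...
        });
    }
}
```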

This is really a formalization of all the other topics I’ve posted on. Seeing the evolution of a dedicated testing team, the blending between developer and tester, and finally leading to the testing team becoming a team focused not only on testing but increasing production in general is quite fascinating. In a way, the testing team was always about increasing productivity, and formalizing it has made that apparent.

Original Post: http://www.awesome-testing.com/2017/07/testops-5-engineering-productivity.html#more

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

Continuous Development

Continuing on with the TestOps posts by the, well, awesome Awesome Testing blog is Continuous Development. This is actually very familiar territory, as it was a large part of what was taught in my Software Process Management course last year, so it was an enjoyable surprise to see it as the next covered topic.

Generally speaking, Continuous Development is, according to Wikipedia, “the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with software development.”

The first step is Continuous Integration and unit tests. After every single commit by a developer, the main branch of the app should be compiled and built, and then unit tests should be executed to give the quickest feedback possible. The post suggests using mutation testing, which deliberately plants small faults in your code to see how well your tests catch them, as a way to test the unit tests themselves. After that, the developer should be made aware of how their commit changed the overall code coverage statistics.
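To make the mutation testing idea concrete, here is a tiny illustration of my own (not the blog’s code): a tool like PIT might flip a boundary condition, and a good unit test should “kill” that mutant by failing against it.

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class DiscountTest {

    // Production code under test: orders of 100 or more qualify.
    static boolean qualifiesForDiscount(int total) {
        return total >= 100; // a mutation tool might change ">=" to ">"
    }

    @Test
    void coversTheBoundary() {
        // The exact-boundary assertion fails against the ">" mutant,
        // which is what "killing" a mutant means.
        assertTrue(qualifiesForDiscount(100));
        assertFalse(qualifiesForDiscount(99));
    }
}
```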

The next step is Continuous Delivery, or Automated Deployment. One should do numerous test-environment deployments, to exercise the deployment process of the application as well. After this comes testing higher-level things, such as functionality at the integration or API level. End-to-end testing is very expensive, resource-wise, and should be done sparingly.
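An API-level check can be as lightweight as calling an endpoint and asserting on the response. Here is a minimal sketch using Java’s built-in HTTP client; the health-check URL is a stand-in of my own:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiLevelCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/health")) // hypothetical endpoint
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // A cheap integration-level assertion: the deployed service answers at all.
        if (response.statusCode() != 200) {
            throw new AssertionError("Expected 200 but got " + response.statusCode());
        }
        System.out.println("Health check passed: " + response.body());
    }
}
```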

After that is performance testing, using a testing environment as close to the production environment as possible; you want to see how the application handles heavy loads. Then comes security testing, to make sure the application is as safe from being hacked as you can manage, and finally the hardest step, exploratory testing. This is a manual exploration of the application that takes a lot of time and resources, so it should be done sparingly as well.
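Real load testing is usually done with dedicated tools, but the core idea, many concurrent requests against a production-like environment, can be sketched in a few lines. Everything here, especially the staging URL and the request counts, is an assumption for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TinyLoadTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://staging.example.com/")) // hypothetical test environment
                .build();

        // Fire 500 requests from 50 threads and time the whole batch.
        ExecutorService pool = Executors.newFixedThreadPool(50);
        long start = System.nanoTime();
        for (int i = 0; i < 500; i++) {
            pool.submit(() -> {
                try {
                    client.send(request, HttpResponse.BodyHandlers.discarding());
                } catch (Exception e) {
                    System.err.println("Request failed: " + e.getMessage());
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.MINUTES);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("500 requests completed in " + elapsedMs + " ms");
    }
}
```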

Overall, this was another nice intersection between software development and testing, and a good reminder of concepts I learned in the very recent past that I found very interesting at the time. The ability to streamline the process for a developer and give them feedback as quickly as possible is incredibly important, and its power to foster greater productivity is readily apparent. To that end, there are many useful tools out there for testers and developers alike. It’s a very straightforward example of testing directly helping developers, which is nice to see.

Original Post: http://www.awesome-testing.com/2016/10/testops-3-continuous-testing.html

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

Future of Testing Continued

A while ago (a long while), I talked about an interesting post about something called TestOps in this post: https://fusfaultyfunctions.wordpress.com/2017/09/20/the-future-of-testing-taking-an-interesting-turn/.

Now I’d like to talk about a post by Awesome Testing describing an important topic in TestOps: Testing in Production. Essentially, it’s a set of testing approaches that utilize real users and the different ideas and implementations that arise in a production environment. So how do you test a new feature or update built for a service?

Obviously, one metric is that it works without errors for the users. But the next most important metric is the number of users it retains. The number of people using the service, and continuing to use it, is the most important thing for these applications. And this needs to be tested.

Now what do you do when you build a new feature and need to test it? You could just throw it out into the wild and see how the statistics work out: if it worked, keep it; otherwise, throw it away. But that can annoy users and make you lose people.

There is no single best way, but there are several in common use, each mitigating different risks. The first method outlined is Blue-Green Deployment, or Canary Deployment. You deploy the new feature or software on a separate set of servers, the blue pool. Preliminary tests are done with internal users, and then, if things look good, 5% of users are redirected to it from the original servers, the green pool. Then you can see how well the new software is working. If it doesn’t look good, you move everyone back to the green pool.

Test Flights are similar. You hide a new feature behind one code path, alongside another code path without the feature. By changing a config file, you show the new feature to users in the same manner as in Canary Deployment: first internal users, then, let’s say, 5%. The feature can always be reverted with another change to the config file. A/B testing is a bit more extreme: essentially you have, say, two variations of an application, half the users see one and half see the other, and the one that retains the most users becomes the finalized version.
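The common thread in canary rollouts, test flights, and A/B tests is deciding, deterministically, which bucket each user falls into. Here is a hedged sketch of that bucketing logic; the hashing scheme and the 5% / 50% splits are my own assumptions, not anything from the original post:

```java
public class RolloutBuckets {

    // Map a user id to a stable bucket from 0 to 99.
    static int bucket(String userId) {
        // Mask off the sign bit instead of Math.abs, which can overflow.
        return (userId.hashCode() & Integer.MAX_VALUE) % 100;
    }

    // Canary-style: only the first 5% of buckets see the new feature.
    static boolean inCanary(String userId) {
        return bucket(userId) < 5;
    }

    // A/B-style: half the users get variant A, half variant B.
    static String abVariant(String userId) {
        return bucket(userId) < 50 ? "A" : "B";
    }

    public static void main(String[] args) {
        for (String user : new String[] {"alice", "bob", "carol"}) {
            System.out.printf("%s -> canary=%b, variant=%s%n",
                    user, inCanary(user), abVariant(user));
        }
    }
}
```

Because the bucket is derived from the user id rather than chosen at random per request, each user consistently sees the same variant, which keeps the retention statistics meaningful.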

There’s also a technique where faults are intentionally injected into software; designing to survive those failures pushes the software toward being more resilient and secure. And then there’s one popularized by Microsoft, often called dogfooding: developers are forced to use the applications they are developing themselves, to ensure the program offers a reasonably good user experience.

Overall, it’s really interesting seeing the considerations required when testing new software. I never considered that testing would check not only for a working product, but for one that works well too. It makes testing a much more complicated, yet exciting, field. It also makes the job of a tester much more integral to the success of an application.

Original Post: http://www.awesome-testing.com/2016/09/testops-2-testing-in-production.html

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

Software Testing without a Software Tester

Unfortunately, or fortunately, this post is not about automatic testing. According to the author of this interesting post, James Bach, automatic testing is as feasible as automatic programming. That is, not very feasible. Yet, at least.

In a very nice story, James writes a program while his non-programmer, non-tester sister, Erica, stays by his side. He walks through his logic aloud as he writes, and his sister points out issues along the way. Despite her not having any computer science background, the mere presence of an interested companion leads to better, less issue-ridden programs. This made me remember something from a long time ago, when I was told about the somewhat infamous rubber duck test.

For those that don’t know, the rubber duck test is when a programmer describes aloud how his or her program works…to a rubber duck. It sounds ridiculous, but the mere act of trying to explain one’s own work in an understandable way can lead to discovering issues that might otherwise go unnoticed: a design fault that seems obvious in hindsight but was overlooked because the programmer grew tunnel-focused in the midst of programming.

This very neatly explains the need for a tester, or at the very least something that gets a developer to think about their work in a different way. It also demonstrates the problem-solving nature that seems inherent in programmers. The need to resolve the conflict of a companion not understanding you is familiar to me. My middle-school-aged brother sometimes asks me about stuff I’m doing, which very often turns out to be classwork. I have surprised myself with how much I wanted to clear up any looks of confusion I got when attempting to explain a mathematics problem or a program I was having issues with.

This natural motivation of wanting to be understood can lead to much deeper introspection, especially when the other party is not very familiar with the concepts being explained. It often leads to a long series of “Why?” questions, something James himself calls ‘drilling down’: continuously being asked for more details. Trying to frame key concepts one believes one understands can reveal flaws or gaps in that understanding.

I really enjoyed this article; the story was nice and familiar. It made me realize what a basic necessity a tester, or at least someone (or something) other than the programmer, is to creating a program. Or, if not a necessity, such an incredible boon to the entire process that it should be one.

Original post: http://www.satisfice.com/blog/archives/852

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

Potential Future Traps

After the last post my mind has been on the future of software testing, but after a week or so it’s taken a bit of a journey, and now I’m wondering about my own future in software testing. As usually happens when I think of the future, negativity is at the forefront, which is why potential pitfalls are something I’m really interested in. So, wonderfully, I found a helpful post on Awesome Testing listing and describing possible software testing traps. The ‘traps’ in this case are mistaken implementations or uses of software testing that lead to very unfortunate results, based on the author’s own experiences and what he has read or discussed with others.

It is incredibly interesting seeing the ways companies mistakenly attempt to optimize their use of testers. Rating testers on the number of bugs found, or incentivizing them only to find as many bugs as possible, are issues that seem obviously awful to me, but I have hindsight on my side. A good number of the traps were things I hadn’t considered, like the issues caused by developers not doing any testing themselves. By leaving testers as the only ones to run tests and do quality assurance, companies turned testers into scapegoats for issues, or into a commodity that is moved around too often to do its job correctly.

Trap number three shows the human side of the process, to me at least. If testers make bad tests and have to constantly fix them, or if the tests offer very little information because of their issues, then developers become disillusioned with them. It becomes safer, and more likely, for the tests to be more and more ignored. Software testing is not just finding bugs in a program; it must do so reliably and in a way that aims for improvement. The roles of developer and tester are also not as clear cut as one would think. Or they shouldn’t be, at least.

I feel much more confident now knowing many of the issues that one can accidentally fall into in the process of software creation and testing, and how to possibly avoid them. I also found insightful the idea that testing is not simply about finding bugs, but is part of a greater system working toward the end goal of a wonderful software product. It is incredibly useful to understand the full Software Engineering Life-Cycle no matter where your job lands on it.

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

The Future of Testing Taking An Interesting Turn

So, I was reading a blog called Awesome Testing the other day (really clever name, I know; it’s what caught my attention) and saw a whole section of pages titled “TestOps”. Intrigued, I ventured further by reading the posts, which led me to the initial blog post that coined the term, by Seth Eliot. The short and sweet of it is that traditionally, testing can be represented by this diagram:

[Diagram: the traditional software testing model]

The tests are created, they are run on the system, the results are obtained, the results are checked against the ‘oracle’, and an assessment is completed. However, the interesting thing is the current trend in software products. The most popular products nowadays are not simple programs but services: Facebook, Amazon, Twitter, etc. With this change, testing becomes quite different. Big Data concepts are used, and new features are tested with exposure control and monitoring. This becomes the new model:

[Diagram: the new TestOps testing model]

The testing system becomes a whole architecture that testers are tasked with maintaining and using. This whole post was incredibly interesting to me. As someone new to software testing, my experience up until now has been discrete test cases and suites. With TestOps, however, testing becomes a whole new beast. Using Big Data techniques to test how well a service is doing, not just whether it runs correctly, was a definite surprise. As someone currently dipping their toes into Big Data in a Software Quality Assurance course, I didn’t expect the areas to meet like this.

The final paragraphs of the post, describing how this development is blurring the line between tester and developer, are exciting. There was a worry in the back of my head that software testing and development were two divided areas, that one had to choose one or the other. Knowing how much they will intermingle in the present and future helps alleviate this to a great extent.

It also makes a large amount of sense. As the software being tested becomes much more dynamic, testing must as well. Testing not just whether software is working, but whether it is working well, is quite an interesting distinction that requires more complicated solutions. These tests will require a process similar to software development to create: choosing the right kind of architecture, using tools similar to software process management tools to cut down on the time QA needs to make new tests or change existing ones, and adopting continuous updates and integration.

The incredibly interesting blog post can be found here: https://dojo.ministryoftesting.com/lessons/the-future-of-software-testing-part-two

The original blog post that sent me to it: http://www.awesome-testing.com/2016/07/testops-missing-piece-of-puzzle.html

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

What’s Up World!

First Post of the blog! Exciting stuff.

The main focus for now is going to be Quality Assurance and Software Testing. The faulty functions in question are hopefully not going to be ones I myself create.

The picture is actually really symbolic, by the way. Let’s see if the Anteater symbolizes me and the blog or if the glass does.

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.