A while ago (a long while), I talked about an interesting post on something called Test Ops here: https://fusfaultyfunctions.wordpress.com/2017/09/20/the-future-of-testing-taking-an-interesting-turn/.
Now I’d like to talk about a post by Awesome Testing describing an important topic in Test Ops, Testing in Production. Essentially, it’s a set of techniques that uses real users and the conditions that only arise in a production environment. So how do you test a new feature or update produced for a service?
Obviously, the first metric is that it works without errors for the users. But the next most important metric is the number of users it retains. The number of people using the service, and continuing to use it, is the most important thing for these applications. And this needs to be tested.
Now what do you do when you produce a new feature and need to test it? You could just throw it out into the wild and see how the statistics work out: if it worked, keep it; otherwise, throw it away. But that can annoy users and cost you people.
There is no single best way, but there are several techniques in use, each with risks that need to be mitigated. The first method outlined is Blue-Green Deployment, or Canary Deployment. You deploy the new feature or software on a separate set of servers, the blue pool. Preliminary tests are run with internal users, and if everything looks good, 5% of users are redirected to it from the original servers, the green pool. Then you can see how well the new software is working. If it doesn’t look good, you move everyone back to the green pool.
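Just as an illustration of that 5% redirect, here is a minimal sketch in Python. The pool names, the percentage, and the hashing scheme are my own assumptions, not something from the original post:

```python
import hashlib

# Hypothetical pools: "green" runs the current release, "blue" runs the new one.
GREEN_POOL = "green"
BLUE_POOL = "blue"

CANARY_PERCENT = 5  # share of traffic sent to the blue (new) pool


def pool_for_user(user_id: str) -> str:
    """Deterministically route a user to a server pool.

    Hashing the user id keeps each user on the same pool across requests,
    so their experience stays consistent while the canary runs.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return BLUE_POOL if bucket < CANARY_PERCENT else GREEN_POOL


# Rolling back is just setting CANARY_PERCENT to 0: everyone goes back to green.
print(pool_for_user("user-42"))
```

The nice property is that rolling back doesn’t require touching the new servers at all; you just stop sending traffic to them.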
Test Flights are similar. You hide a new feature behind one code path, alongside another code path without the feature. By changing a config file, you expose the new feature to users in the same manner as in Canary Deployment: first internal users, then, let’s say, 5%. The feature can always be reverted with a change to the config file. A/B testing is a bit more extreme: essentially you have, say, two variations of an application. Fifty percent of users see one and fifty percent see the other, and the variation that retains the most users becomes the finalized version.
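A rough sketch of both ideas, assuming a simple JSON config (the flag names and rollout numbers are made up for the example): flipping one config value reverts the feature, and the same bucketing trick covers a 50/50 A/B split.

```python
import hashlib
import json

# Hypothetical config; in practice it would be loaded from a file or config
# service so it can be changed without redeploying any code.
CONFIG = json.loads("""
{
    "new_checkout_flow": {"enabled": true, "rollout_percent": 5},
    "homepage_variant_b": {"enabled": true, "rollout_percent": 50}
}
""")


def _bucket(user_id: str, flag: str) -> int:
    # Hash user id + flag name so each flag gets its own independent 0-99 bucket.
    return int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100


def is_enabled(flag: str, user_id: str) -> bool:
    """Return True if this user should see the feature behind `flag`."""
    settings = CONFIG.get(flag, {"enabled": False, "rollout_percent": 0})
    if not settings["enabled"]:
        return False  # one config change turns the feature off for everyone
    return _bucket(user_id, flag) < settings["rollout_percent"]


# Test flight: roughly 5% of users take the new code path.
if is_enabled("new_checkout_flow", "user-42"):
    pass  # new code path
else:
    pass  # old code path

# A/B test: a 50% rollout splits users evenly between variant A and variant B.
variant = "B" if is_enabled("homepage_variant_b", "user-42") else "A"
```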
There’s also a technique where faults are intentionally injected into the software. Doing this pushes the design toward tolerating failure gracefully. And then there’s one popularized by Microsoft: developers are required to use the applications they are developing themselves, to ensure the program offers a reasonably good user experience.
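To make the fault injection idea concrete, here is a toy version in Python. The wrapper, failure rate, and exception are all illustrative assumptions; real tools are far more sophisticated, but the principle of forcing callers to handle failures is the same.

```python
import random


class InjectedFault(Exception):
    """Raised deliberately to simulate a failing dependency."""


def with_fault_injection(func, failure_rate=0.1):
    """Wrap a function so a fraction of calls fail on purpose.

    Running the system with wrappers like this verifies that the code
    around the dependency degrades gracefully instead of crashing.
    """
    def wrapper(*args, **kwargs):
        if random.random() < failure_rate:
            raise InjectedFault(f"simulated failure in {func.__name__}")
        return func(*args, **kwargs)
    return wrapper


def fetch_recommendations(user_id):
    return ["item-1", "item-2"]  # stand-in for a real downstream call


fetch_recommendations = with_fault_injection(fetch_recommendations, failure_rate=0.2)

# The calling code must tolerate the injected failures, e.g. with a fallback.
try:
    items = fetch_recommendations("user-42")
except InjectedFault:
    items = []  # fall back to an empty list rather than breaking the page
```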
Overall, it’s really interesting to see the considerations involved in testing new software. I never considered that testing would check not only for a working product, but for one that works well too. It makes testing a much more complicated, yet exciting, field. It also makes the job of a tester much more integral to the success of an application.
Original Post: http://www.awesome-testing.com/2016/09/testops-2-testing-in-production.html
From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.