Holiday Season Quality Assurance

https://testingpodcast.com/

This week, I listened to PerfBytes Black Friday, Hour 4 at the link provided above. The podcast centers on the business practices of major retailers and how those practices reflect on the work of their IT departments. Right after Thanksgiving, retail companies see a huge spike in consumer activity with Black Friday and the holiday season, which puts a big strain on eCommerce systems.

The hosts begin a practical assessment of retail websites by scoring their load times and response times. They discovered that some retailers don't put enough resources into their websites. Olight is a manufacturer of flashlights whose website scored a D, which is unacceptable from a consumer standpoint, especially around Black Friday and the holiday season. For comparison, they also scored a competing website, Maglight, which performed the same or worse in every area.
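As a rough illustration (my own sketch, not anything from the podcast), scoring a site's responsiveness can be as simple as timing a full request and mapping the elapsed time to a grade. The URL and the grading thresholds below are hypothetical:

```typescript
// Minimal response-time scorecard sketch. Requires Node 18+ for the
// built-in fetch; the thresholds are illustrative, not the podcast's.
async function scoreSite(url: string): Promise<string> {
  const start = performance.now();
  const response = await fetch(url);
  await response.text(); // wait for the full body, not just the headers
  const elapsedMs = performance.now() - start;

  if (elapsedMs < 1000) return `A (${elapsedMs.toFixed(0)} ms)`;
  if (elapsedMs < 3000) return `B (${elapsedMs.toFixed(0)} ms)`;
  if (elapsedMs < 5000) return `C (${elapsedMs.toFixed(0)} ms)`;
  return `D (${elapsedMs.toFixed(0)} ms)`;
}

scoreSite("https://www.example.com").then(grade => console.log(grade));
```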

Clearly, these shopping websites were not up to par with the expected performance, and the companies don't seem interested in upgrading them. Major online sellers like Amazon and Home Depot also sell these products, but with a much more marketable look and feel to their web pages. This test suggests that these manufacturers should invest more in their websites and might make more revenue by selling directly rather than paying a margin to the dominant online retailers. The need for quality assurance exists, but there is also a trend among lesser-known companies like Olight and Maglight to let bigger shopping websites carry the majority of their sales for them, at a cost. From that perspective, upgrading a website may not seem worth the effort.

Next, the hosts go over ways to better communicate performance engineering across multiple departments, especially around high-traffic times of the year. Typically, the conversation starts with cost estimates based on a specific plan for that year. The priority for business leaders higher up the corporate chain is to cut costs as much as possible; however, cutting the IT department's budget results in drastic sales consequences. The best way to drive change in the industry and prevent these risks is to communicate costs better.

From the blog CS@Worcester – CS Mikes Way by CSmikesway and used with permission of the author. All other rights reserved by the author.

It’s Not Just Usability

This week I read a post by Joel Spolsky, the CEO of Stack Overflow. The post is about designing social interfaces, which he frames as the next level of software design: after you've got the UI right, you have to get the social interface right. Software in the 1980s, when usability was invented, was all about computer-human interaction, and a lot of software still is. But the Internet requires a new kind of software: software that's about human-human interaction, such as discussion groups, social networking, and online classifieds. It's all software that mediates between people, not between the human and the computer.

When you're writing software that mediates between people, after you get the usability right, you have to get the social interface right. The social interface matters more, because software with the best UI will still fail if its social interface is awkward. Next, let's look at an example of a successful social interface. Many humans are less inhibited when they're typing than when they are speaking face-to-face. Teenagers are less shy with cellphone text messages, so they're more likely to ask each other out on dates. That genre of software was so successful socially that it's radically improving millions of people's love lives (or at least their social calendars). Even though text messaging has a ghastly user interface (only recently improved a little bit), it became extremely popular with the kids. The joke of it is that there's a much better user interface built into every cellphone for human-to-human communication: this clever thing called "phone calls." It is so simple: you dial a number, and after that everything you say can be heard by the other person, and vice versa. However, many people would rather break their thumbs typing huge strings of numbers just to say "damn you're hot." Clearly, it is more awkward to say this aloud than to text it!

In designing social interfaces, you have to look at sociology and anthropology. In societies, there are freeloaders, scammers, and other miscreants. In social software, there will be people who try to abuse the software for their own profit at the expense of the rest of the society. Whereas the goal of user interface design is to help the user succeed, the goal of social interface design is to help the society succeed, even if that sometimes means an individual user has to fail.

Social interface design has grown and developed rapidly alongside social networking. Software companies hire people trained as anthropologists and ethnographers to work on social interface design. Instead of building usability labs, they go out into the field (or the online space) and write ethnographies.

Article: https://www.joelonsoftware.com/2004/09/06/its-not-just-usability/

From the blog CS@Worcester – ThanhTruong by ttruong9 and used with permission of the author. All other rights reserved by the author.

Design Patterns: Proxy

Earlier this semester in my Software Design class, we had an assignment where we chose to write a report on either the Proxy, Facade, or Decorator design pattern (or any combination of the three, for extra credit). I had chosen the Decorator pattern, and due to the number of assignments I had due, I never went back to examine Proxy or Facade. So I'm here to do just that with the assistance of Sourcemaking's article, starting with Proxy.

The Proxy is a structural design pattern that adds a wrapper to an object so that the object itself doesn't suffer from excess complexity. There are actually a large number of situations where this is useful. Sometimes objects get excessively resource-heavy, and you don't want to instantiate them unless absolutely necessary. Sometimes you'd just like an extra layer of protection controlling access to an object, for the sake of security or for general ease of use. Consider getter and setter methods as an example: you don't want open access to the data within your object, so you make the data private (hidden from outside access) and instead create public methods for retrieving and changing the data of the private variables. In a way, getter and setter methods are mini proxies. Of course, the difference is that proxies are meant to be entire objects in themselves.

For a real-world example I'll reference Sourcemaking's article. In order to make a payment, someone would use the funds they have in their bank account. Instead of adding methods such as "makePayment()" to the Account class and increasing its complexity, it is possible to pay with a check, which can indirectly access the funds of the account. In this example, the check is the proxy to the Account class. Here's a UML-like diagram:

[UML diagram of the Proxy pattern, taken from Sourcemaking.com]
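To make the structure concrete, here is a minimal sketch of the check-and-account idea in TypeScript. The interface, class, and method names are my own for illustration, not Sourcemaking's exact code:

```typescript
// Both the real subject (Account) and the proxy (Check) share an interface,
// so callers can use either one interchangeably.
interface Payment {
  pay(amount: number): void;
}

class Account implements Payment {
  private funds = 500;

  pay(amount: number): void {
    this.funds -= amount;
    console.log(`Paid ${amount} from account; ${this.funds} remaining.`);
  }
}

// The proxy: a Check exposes the same interface but controls access to the
// underlying Account rather than exposing it directly.
class Check implements Payment {
  constructor(private account: Account) {}

  pay(amount: number): void {
    // Extra concerns (validation, logging, lazy loading) can live here
    // without cluttering the Account class itself.
    this.account.pay(amount);
  }
}

const check: Payment = new Check(new Account());
check.pay(100); // the caller never touches the Account directly
```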

The Proxy design pattern serves many purposes and is perhaps one of the easiest design patterns (in my opinion, of course) to understand and use. It's very similar to Decorator in structure (which I did a project on earlier this semester), but its implementation is slightly different. Decorators are used to add new functionality to an object, whereas the Proxy is designed to encapsulate existing functionality in another, "adjacent" object.

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

Two-Factor Authentication

For my final blog post of the year, I wanted to select a more interesting topic. The article Two-Factor Authentication with Node.js by David Walsh covers two-factor authentication and shows how it works behind the scenes using JavaScript. He even includes examples of how to implement two-factor authentication using QR codes.

Two-factor authentication is a user-verification method used by many web applications and services today. When a user tries to log in, a verification code is sent to a previously specified external device or address, such as a phone number or email. This code usually expires after a set time. The user that requested the code then enters it in the application or program to verify themselves and gain access to their account. By splitting access to your account across multiple factors, you greatly increase its security. Nobody can log into your email, even if they know the password, if a two-factor code is being sent to you through SMS.

David Walsh explains how the system of two-factor authentication is divided up behind the scenes. The first step is to generate a unique secret key for the user, which is used to validate two-factor authentication codes in the future. Then, the user adds the site to their authenticator app, which derives codes from that secret. Finally, the code provided by the user must be validated to confirm it matches what was expected. If it doesn't, the user can try again with a new code. Since most two-factor authentication services refresh the code every thirty seconds or so, it is not necessary to lock the user out if they get one wrong.
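Here is a minimal sketch of that flow in TypeScript, assuming the popular speakeasy npm package (the article works with a Node library along these lines; the exact package choice and names here are my assumption):

```typescript
import * as speakeasy from "speakeasy"; // npm install speakeasy

// Step 1: generate a unique secret for the user, stored server-side.
// The otpauth URL is what a QR code would encode for authenticator apps.
const secret = speakeasy.generateSecret({ length: 20 });
console.log("Add to authenticator app:", secret.otpauth_url);

// Step 2: the user's authenticator app derives a six-digit code from the
// secret and the current time; here we simulate that for demonstration.
const token = speakeasy.totp({ secret: secret.base32, encoding: "base32" });

// Step 3: validate the code the user submits against what we expect.
const verified = speakeasy.totp.verify({
  secret: secret.base32,
  encoding: "base32",
  token,
  window: 1, // tolerate one 30-second step of clock drift
});
console.log(verified ? "Code accepted" : "Try again with a fresh code");
```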

Two-factor authentication is an incredibly powerful and important form of security that allows you to put an even tighter lock around your information. As a developer, it is important to consider allowing your users to use two-factor authentication for your application, especially if any sensitive data is stored or if access to an account leads to vulnerabilities in the system. It's one of those things you don't realize is so important and valuable until you get your account stolen and realize it could have been prevented.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Test-niques

The article Test Case Design Techniques to Ensure High-quality Software on ReQtest brings up the three major categories that test design techniques are generally classified into: specification-based, structure-based, and experience-based. Each of these categories includes testing methods we covered in CS-443, especially the structure- and specification-based techniques.

Specification-based techniques are also referred to as black-box techniques, which should give you some idea of what kind of testing this category covers. Testing techniques under this category are generally written to the technical specifications and client’s requirements. These testing techniques rely more on an understanding of what a program’s intended functions are, and what different states it can find itself in performing those functions. The actual structure of the code is not considered. Using specification-based testing, you can verify that your program works the way it was intended and is written to specification.
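To make the idea concrete, here is a small, hypothetical black-box example. The test cases below are derived purely from a stated requirement ("orders of $100 or more get a 10% discount"), not from reading the implementation:

```typescript
import assert from "node:assert";

// Implementation under test (shown only so the sketch runs; the tests
// below were written from the requirement alone).
function finalPrice(orderTotal: number): number {
  return orderTotal >= 100 ? orderTotal * 0.9 : orderTotal;
}

// Boundary values taken straight from the specification:
assert.strictEqual(finalPrice(99.99), 99.99); // just below the threshold
assert.strictEqual(finalPrice(100), 90);      // exactly at the threshold
assert.strictEqual(finalPrice(150), 135);     // comfortably above it
console.log("All specification-based tests passed");
```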

Structure-based techniques are conversely referred to as white-box techniques. As the name suggests, these techniques test the actual code that is written and require knowledge of the code and its internal structure. Many testing techniques of this type involve changing conditions and values and making sure the code works in multiple cases, validating each branch of the code. Structure-based techniques help highlight any glaring structural or logical issues within your code that you may have overlooked.
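By contrast, here is a hypothetical white-box sketch: after reading the code and spotting its branches, we pick inputs that force each branch to execute at least once:

```typescript
import assert from "node:assert";

function classify(age: number): string {
  if (age < 0) throw new RangeError("age cannot be negative"); // branch 1
  if (age < 18) return "minor";                                // branch 2
  return "adult";                                              // branch 3
}

assert.throws(() => classify(-1), RangeError); // covers branch 1
assert.strictEqual(classify(17), "minor");     // covers branch 2
assert.strictEqual(classify(18), "adult");     // covers branch 3
console.log("Every branch executed at least once");
```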

The final category is experience-based techniques. This category is pretty broad, mostly involving techniques that rely on prior knowledge or information that couldn’t be gained just from testing. This is the type of testing you just can’t plan for. However, the more you test, the more you can apply knowledge from previous tests to fix issues before they become back-breaking.

Dividing the testing into different categories makes their usefulness easier to understand. Structure-based techniques are more difficult and require more knowledge, but are exhaustive and effective at finding issues in the code. Specification-based techniques verify that the parts of your code that matter are functioning correctly. Experience-based testing simply leverages your knowledge of an area to improve testing. All these techniques have valid uses in the world of testing.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Testing Gone Wrong

In his post Six Things That Go Wrong With Discussions About Testing, James Bach lists six different things that often go wrong when testers are discussing testing. His six reasons can be summarized as testers misunderstanding the purposes of testing and not following proper procedure when it comes to testing.

Your goal when testing is to discover any vulnerabilities in your program, but also to find strengths and things your program is doing well. This covers James' first two points. When it comes to testing, quality is much more important than quantity. It is better to have more complete coverage of your program and all its branches than to have a bunch of tests doing the same thing and missing different parts of your program. This is why it is important to think of tests as events: what matters is what your test is testing and how it goes about that, not whether it carries the name of some "Generic X Test."

Oftentimes people get carried away with automated testing and rely too much on it. While it is good to use automated testing, developers have strengths far beyond it, and should use automated tests as tools to better accomplish effective testing, not as automated workers that do your work for you. Thinking of automated tests that way also distracts from the purpose of testing, and makes it easier to forget why you should run certain tests in the first place.

James' last point is probably his most important one. Testing isn't just some set task that can be navigated the same way every time. What I've learned this year is that testing is about thinking critically about a program and the ways it COULD act, and designing your tests according to those parameters. Consider where things can go wrong, and test the strength of the weak parts of your code. But most importantly, be ready to learn, as every program calls for a different set of tests and a different strategy that is most effective for testing it. Testing is a dynamic thing, and testers must be dynamic people.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Back to Front

In his post Back-end Development vs Front-end Development, Mikke Goes explains the differences between back-end and front-end development. He also goes into detail about the places where they intersect. Mikke also speaks briefly on combining both areas of development into what is known as full-stack development.

Back-end development concerns itself with the storage and manipulation of data. For example, when you log into your email, your browser sends a request to the server to return all the information concerning your account, such as your settings and inbox. The mechanisms behind storing and transferring this information are the back-end. Back-end development deals with the parts of a program you don't usually see, but which do a lot of work behind the scenes.
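A toy version of that email example might look like the sketch below: the back-end owns the data and answers requests over HTTP, while the front-end only asks for the data and displays it. This assumes the Express framework, and the route and field names are made up for illustration:

```typescript
import express from "express"; // npm install express

const app = express();

// Back-end: owns the data and exposes it over an HTTP endpoint.
const inbox = [{ from: "alice@example.com", subject: "Hello!" }];
app.get("/inbox", (req, res) => {
  res.json(inbox);
});

app.listen(3000);

// Front-end (this part would run in the browser): fetch and display.
// fetch("http://localhost:3000/inbox")
//   .then(r => r.json())
//   .then(messages => messages.forEach(m => console.log(m.subject)));
```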

Front-end development finds its responsibilities mostly in the visual aspects of your program or application. All the buttons, text, and input fields on a web page are designed by front-end developers. This gives the users ways to interact with all the information stored in the back-end, creating an interface that bridges the gap between the client and the server. Both the functionality and appearance of web pages can fall under the duties of a front-end developer.

Full-stack development puts it all together. This is an understandably powerful position to operate in, as working on both the front- and back-end together ensures that they will be designed with each other in mind and written properly and effectively. In CS-343 this semester, we have to work on a project where we are effectively full-stack developers. It is challenging to work on both at the same time, but I think working on one end helps your understanding of what needs to go on the other end to tie everything together.

Front-end and back-end development are both important concepts. A web application isn't going to exist without one or the other, and both are ultimately just as important to the end result. The differences are apparent, which is why it makes sense that some people make a living doing one or the other. Ultimately, Mikke makes a good point that understanding both types is a boon to learning how to develop web applications.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

The Value of Testing

The blog post How Do I Know My Tests Add Value? by AutomationPanda discusses the value of proper testing. The author brings up the common issues that sprout from using bug-fix counts as a metric for successful testing, and highlights why testing is important and can help improve development.

Testing is important because it validates that your program is working as it is expected to. Well-written tests with the proper amount of coverage give you information about your program and whether any changes need to be made. A passing test indicates correctness and that your program is working as expected. A failing test points out a bug that needs to be fixed and helps highlight weak parts of your code.

The absence of testing might not necessarily lead to bugs, but we all know as programmers that even when it seems your code is running exactly as you intended, logic errors can happen, and sometimes things weren't written with every situation in mind. When testing is being implemented, developers are accountable for the code they write and must think carefully about the issues that may arise.

Tracking bugs isn't necessarily effective because it encourages testers to find issues even when there may not be any. All tests passing is still good news and a sign of progress, even if it just means you are doing everything right. Enforcing bug quotas and forcing testers to find issues means they will expend effort looking for things that aren't there and nitpicking small issues, rather than writing more tests or making sure they have good coverage, which are far more effective at finding issues in your code.

At the end of his post, the author suggests some different metrics to use: time-to-bug discovery, coverage, and test failure proportions. All of these serve as much more accurate and effective measures of determining whether your testing has value. Whatever metric you use, it is important to think about why you are testing and whether you are doing it in a way that tells you something about your program.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Model-View-Controller

On his blog, Mikke Goes takes the time to explain the Model-View-Controller design pattern in his post What is the Model-View-Controller (MVC) Design Pattern?. He uses the analogy of an ice cream shop to describe the different functions of the components. The waiter is the view, the manager is the controller, and the person preparing the ice cream takes the role of the model. Together, when a customer makes an order, they each can perform their responsibilities and successfully handle the customer’s request.

The Model-View-Controller design pattern separates the components of your code into sections that divide logic from interface. Keeping the functionalities separate from each other will make your application easier to modify in the future without running into issues. Each of these different groups has a different responsibility when it comes to the application and how requests are handled.

The view consists of the parts of your application that your user will see and interact with. It is not very smart, only outputting the information given to it by the controller. The view helps users make sense of the logic behind your application and interface with it.

The model is the opposite, dealing with all the logic and data manipulation happening behind the scenes of your application. The model responds to requests by processing any data in the necessary ways and giving it back to the controller in a form the view can understand.

The controller handles the communication and interaction of these two. When a request is put in through the view, the controller brings this to the model, and takes the model’s output back to the view to be displayed. It is a middleman that helps connect the two other layers of responsibility.
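Here is a minimal sketch of the three roles in TypeScript, loosely following the post's ice cream shop analogy. The class names and flavors are my own, for illustration:

```typescript
// Model: the "ice cream maker" owns the data and the business logic.
class OrderModel {
  private flavors = ["vanilla", "chocolate", "strawberry"];

  prepare(flavor: string): string {
    if (!this.flavors.includes(flavor)) {
      return "Sorry, we don't have that flavor.";
    }
    return `One scoop of ${flavor}, coming up!`;
  }
}

// View: the "waiter" only presents what it is handed.
class OrderView {
  display(message: string): void {
    console.log(message);
  }
}

// Controller: the "manager" routes requests between model and view.
class OrderController {
  constructor(private model: OrderModel, private view: OrderView) {}

  placeOrder(flavor: string): void {
    const result = this.model.prepare(flavor);
    this.view.display(result);
  }
}

const shop = new OrderController(new OrderModel(), new OrderView());
shop.placeOrder("chocolate"); // view prints: One scoop of chocolate, coming up!
```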

The Model-View-Controller design pattern seems like a pretty simple design pattern to comprehend. All of the components are divided by responsibility, and the program is written with this in mind, making sure that only certain components handle tasks within their category. In terms of our projects, the front-end would amount to the view and the back-end to the model, with the TypeScript file functioning as the controller. It seems like the development of web applications naturally falls into this design pattern.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

An Introduction to Code Reviews

In my Software Quality Assurance & Testing class, we recently did a group-based code review of a simple Sir Tommy Solitaire program that my professor wrote a few years ago. It was a fun and interesting group activity that I enjoyed quite a bit, so I decided to look more into common code review practices to get a better understanding of how to conduct them well. This article by Trisha Gee, which I found on DZone.com, was a great resource for learning more about them.

First and foremost, the most important thing to remember when conducting code reviews is that your job is to view the code within the parameters of the project you're working on. For example, if your company uses Checkstyle and prefers Google Java Conventions over Sun Code Conventions, then the team must keep that in mind when reviewing code. All reviews must be made through a lens that corresponds with the vision of the project at hand.

When you’re actually conducting the review, what are the easiest things to look for? The article from DZone mentioned these four topics:

  • Formatting: Things like curly braces, spacing, and line breaks.
  • Style: Is the code laid out in a logical way (variable declarations near their usage, etc.)?
  • Naming: Are naming conventions upheld throughout the program? Are they descriptive enough?
  • Test Coverage: Are there tests that cover the code and what it interacts with?

However, there are plenty of tools that exist to catch the potential errors that come from these easily spotted issues. Humans aren't so great at noticing minute details; that's something machines are perfect for. Formatting tools like Checkstyle, which I referred to earlier, will usually ensure that formatting standards are upheld, and tools like JaCoCo can assist with measuring test coverage (we've used both of these in class, and it's remarkable how helpful they are).

Really, the job of the reviewer is to focus on the design choices and the quality of the code. Not only should the code run, it should also be readable and maintainable, and it should be written with good design principles in mind. Is code reused where it should be? Are design patterns used in elegant ways, or do they needlessly increase complexity?

I haven't yet read through all of Clean Code by "Uncle Bob," but from what I know, many of the principles in that book should be deeply considered when conducting a code review. I think anyone looking for further elaboration on the topic of code reviews should read both the article I found on DZone (and other DZone material, frankly) and Clean Code; it's as popular as it is for a reason.

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.