Author Archives: Timothy Montague

Some simple usability test methods

https://www.infoworld.com/article/3290253/application-development/6-usability-testing-methods-that-will-improve-your-software.html

A/B Testing

A/B testing compares two different application designs, generally websites, over a period of time. Data is collected on how each design performs against some goal, such as product sales. If the analytics show that one design is better at achieving that goal, it is declared superior and chosen as the design to go with. This can lead to further rounds of A/B testing against other designs until the development team settles on one. The analytics are often gathered through third-party tools rather than an in-house solution. An A/B test can be as specific as you want, to the point where only a single small element differs between the two designs, or the designs can be completely different. It is often best to define a specific problem keeping you from your goal that you want to investigate, such as users failing to complete a transaction. A/B testing is extremely clear-cut at providing measurable data for design decisions, but it can take a large amount of time to run the tests and produce the data.
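Since an A/B test ultimately comes down to comparing how often each design achieves the goal, here is a minimal sketch, in Python with made-up numbers, of the kind of significance check an analytics tool might perform behind the scenes; the function name and visitor counts are purely illustrative.

```python
# A minimal sketch of the analytics behind an A/B test: compare conversion rates
# for two designs and ask whether the difference could just be noise.
# The variant counts below are hypothetical.
import math

def two_proportion_z_test(conv_a, total_a, conv_b, total_b):
    """Return the z statistic and two-sided p-value comparing two conversion rates."""
    p_a = conv_a / total_a
    p_b = conv_b / total_b
    pooled = (conv_a + conv_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: design A converted 120 of 4,000 visitors, design B 165 of 4,100.
z, p = two_proportion_z_test(120, 4000, 165, 4100)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests B's lift is not just noise
```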

Design Prototype Testing

Design prototype testing can be used to test a complete workflow in a wireframe or a fully designed portion of a product before it goes into development. A UX/UI designer creates the prototype, and the test helps surface usability issues before the project goes any further. It is important to define a budget for the project as well as the specific goal. Next, you need to choose a prototyping tool such as Axure. Third, you need a measuring tool such as Loop11 to gather analytics from the users. The development team should be familiar with these tools so that the test is worth the time and work invested in it.

Formative Usability Testing

Formative usability testing is a type of early-stage testing that focuses more on quality assurance. It should occur before the first release of the developed product so that it can become the baseline for future tests. With formative usability testing, the product goes through a beta test in which groups perform the defined usability tests. Test cases are usually written down so that participants are guided by specific goals, which produces more meaningful results. Afterwards, it is important to analyze the feedback and make revisions to the product before the official launch. This cycle can be repeated to improve the product over time.

 

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

For agile testing, fail fast with test impact analysis

https://www.infoworld.com/article/3305326/application-testing/for-agile-testing-fail-fast-with-test-impact-analysis.html

Regression testing is testing a piece of software to make sure changes or additions have not broken any existing functionality. In agile development it is important to release your product as quickly as possible, and because of that it is important to fix any failures quickly and efficiently. Developers need feedback as soon as possible, but regression testing can take days to complete, so teams often settle for what their CI runs or scale down their testing. As a result, productivity suffers and the sprint becomes more painful as the deadline approaches. One common approach to speeding up regression testing is to remove end-to-end tests, which causes developers to miss issues that have a very large impact on the user experience. Risk-based testing is another way organizations speed up the process, but many believe it requires too much overhead. However, some amount of risk-based testing has to be done in order to guide prioritization of what needs to be worked on.
Test impact analysis is a technique that rapidly exposes issues in code since the last time it was modified and tested. It does this by correlating all regression tests to the code and selecting only the tests associated with the latest round of changes. It then orders those tests based on their likelihood of detecting problems and prioritizes the execution of the ones most likely to expose them. This helps you find the majority of problems in an extremely short amount of time. It also results in much tighter feedback loops, which means failing builds get resolved faster. Test-to-code correlation makes it easier to determine where additional tests are needed and streamlines the defect remediation process. Test impact analysis can be applied to any test, including unit tests and automated functional tests.

With test impact analysis, a specialized set of test cases is selected and executed. Static analysis finds which code has been modified since the previous test run, then dynamic analysis correlates the regression test suite with the identified code, and regression tests that do not touch the modified code are eliminated from the test run. Additional analysis determines which tests are most likely to expose a defect in the new code, and the tests are sorted so that the one most likely to expose the most problems is executed first. Finally, the merged, correlated code coverage from all executed tests is visualized against the new or modified code map, making it simple to spot coverage gaps.
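As a rough illustration of the selection and prioritization step described above, here is a minimal Python sketch; the coverage map, changed files, and failure rates are hypothetical placeholders for what real test impact analysis tools derive from static and dynamic analysis.

```python
# A minimal sketch of the selection-and-prioritization idea behind test impact
# analysis. All of the data below is made up for illustration.

# Which source files each regression test touches (from prior coverage runs).
coverage_map = {
    "test_checkout": {"cart.py", "payment.py"},
    "test_login": {"auth.py"},
    "test_search": {"search.py", "index.py"},
    "test_cart_totals": {"cart.py"},
}

# Files modified since the last tested build (from static analysis of the diff).
changed_files = {"cart.py"}

# Historical likelihood of each test exposing a defect (illustrative numbers).
failure_rate = {"test_checkout": 0.30, "test_login": 0.02,
                "test_search": 0.05, "test_cart_totals": 0.20}

# Keep only tests whose covered code overlaps the change set...
impacted = [t for t, files in coverage_map.items() if files & changed_files]
# ...and run the ones most likely to expose a problem first.
impacted.sort(key=lambda t: failure_rate.get(t, 0.0), reverse=True)

print(impacted)  # ['test_checkout', 'test_cart_totals']; the other tests are skipped
```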

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

3 biggest roadblocks to continuous testing

https://www.infoworld.com/article/3294197/application-testing/3-biggest-roadblocks-to-continuous-testing.html

Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. Many organizations have experimented with this kind of test automation, but most of them only achieve small successes and never expand the process, and that is due to three factors: time and resources, complexity, and results.

First, teams notoriously underestimate the amount of time and resources required for continuous testing. They often create simple UI tests but do not plan ahead for all of the other issues that pop up, such as numerous false positives. It is important to keep individual tests synced with the broader test framework while also creating new tests for every new or modified requirement, all while trying to automate more advanced cases and keep them running consistently in the continuous testing environment.

Second, it is extremely difficult to automate certain tasks that require sophisticated setup. You need to make sure that your testing resources understand how to automate tests across different platforms, and you need secure and compliant test data in order to set up a realistic test as well as drive the test through a complex series of steps.

The third problem is results. The most cited complaint with continuous testing is the overwhelming number of false positives that need to be reviewed and addressed. In addition, it can be difficult to interpret the results because they do not provide the risk-based insight needed to decide whether or not the tested product is ready to be released. The test results give you the number of successful and unsuccessful tests, from which you can calculate an accuracy, but there are numerous factors that contribute to those unsuccessful tests along with false positives, so ultimately the results will not help you conclude which part is wrong and what needs to be fixed. Release decisions need to be made quickly, and unclear results make that more difficult.

Looking at these reasons, I find continuous testing to be a poor choice when it comes to actually trying to test everything in a system. Continuous testing is more about speeding up the process for a company than about releasing a finished product. In a perfect world, the company would allow enough time to let the team test thoroughly, but when it is a race to release a product before your competition, continuous testing may be your only option.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

A Closer Look at Integration Testing

https://www.qualitylogic.com/2018/01/11/closer-look-integration-testing/

Integration testing is the process of assembling various code modules that perform very specific tasks into a software system that uses them in a more generalized application. The article gives the example of a module that stores and retrieves files combined with one that displays their names in a list, one that reads keystrokes, and one that recognizes mouse clicks; together they form a file management application. Each module is usually created and tested separately using unit tests before being added to the code control system. Once there, the modules are subjected to integration tests, which exercise them in conjunction with other modules. This usually uncovers issues with data formats, operation timing, API calls, database access, and user interface operation.

There are four parts of integration testing that are critical. The first is software integration, which means in part that everything functions the way the user commands. For example, with interface operations it is important that when a user does something unexpected by the system's design, the application does not simply crash but produces an error message. Next, the modules must continually exchange data as the overall system functions. This exchange is often carried over a data or command bus that facilitates module communication. These buses are often the most stressed component of the system, and integration tests should exercise them under various amounts of load to ensure that the individual modules are working together properly. Third are the API calls, which are the most common channel by which the system communicates with third-party services and have also become a tool for connecting modules within the system. This allows data and commands to be separated into different function calls, making the exchange easier to debug. One of the main issues with API calls in integration testing is that each one needs to be tested separately, across the entire set of data both inside and outside the acceptable range; then the API calls must be tested in functional groups. Finally, there is end-to-end testing, which is a big priority for integration testing. The test sets up a sandbox environment and then creates tests for every use the application would go through in real-world operation. Simulated user inputs drive the front end with data and requests, which are then communicated to the middleware to work with the back end of API calls to third parties and database manipulations. All of this is carefully monitored for accuracy and run time.
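To make the idea concrete, here is a minimal Python sketch of an integration test for the article's file management example, exercising a storage module and a listing module together; the FileStore and FileListing classes are hypothetical stand-ins, not code from the article.

```python
# A minimal sketch of an integration test: two modules that each pass their own
# unit tests are exercised together, checking that the data exchanged between
# them (file names in, file names out) stays consistent.
import unittest

class FileStore:
    """Stores and retrieves file contents by name (hypothetical module)."""
    def __init__(self):
        self._files = {}
    def save(self, name, data):
        self._files[name] = data
    def load(self, name):
        return self._files[name]
    def names(self):
        return list(self._files)

class FileListing:
    """Displays the names held by a FileStore as a sorted list (hypothetical module)."""
    def __init__(self, store):
        self._store = store
    def as_list(self):
        return sorted(self._store.names())

class FileManagementIntegrationTest(unittest.TestCase):
    def test_saved_files_appear_in_listing(self):
        store = FileStore()
        store.save("notes.txt", b"hello")
        store.save("budget.csv", b"1,2,3")
        listing = FileListing(store)
        self.assertEqual(listing.as_list(), ["budget.csv", "notes.txt"])

if __name__ == "__main__":
    unittest.main()
```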

 

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Defending against Spectre and Meltdown attacks

http://news.mit.edu/2018/mit-csail-dawg-better-security-against-spectre-meltdown-attacks-1018

In January the security vulnerabilities Meltdown and Spectre were discovered. These vulnerabilities arose not in the usual way, from software bugs or physical CPU problems, but from the architecture of the CPU itself. This means that large numbers of people, businesses, and more were vulnerable. With this new method of defense it is much harder for hackers to get away with such attacks, and it may also have an immediate impact on fields like medicine and finance that limit their use of cloud computing due to security concerns. With Meltdown and Spectre, the attackers took advantage of the fact that operations can take different amounts of time to compute. For example, someone trying to brute-force a password can look at how long it takes for a wrong password to be checked and then compare it to another entry to see if it takes longer; if it does, then something in the longer entry contains a correct number or letter. The normal defense against this kind of attack is Cache Allocation Technology (CAT), which splits up memory so that it is not stored all in one area. Unfortunately this method is still quite insecure because data is still visible to all partitions. The new approach is a form of secure way partitioning called Dynamically Allocated Way Guard (DAWG). Since it is dynamic, it can split the cache and then change the size of those different pieces over time. DAWG is able to fully isolate one program from another through the cache while still offering performance comparable to CAT. It establishes clear boundaries for programs so that when sharing should not happen, it does not, which is helpful for programs with sensitive information.
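Here is a minimal Python sketch of the timing idea described above, using a password check that exits early on the first mismatch; the secret, guesses, and timing loop are purely illustrative, and real Spectre and Meltdown attacks time cache accesses rather than string comparisons.

```python
# A minimal sketch of a timing side channel: an early-exit comparison finishes
# sooner when the first character is wrong than when several characters match,
# and an attacker can measure that difference. Illustrative only.
import time

SECRET = "hunter2"  # hypothetical secret

def naive_check(guess):
    # Returns as soon as a character mismatches, so running time leaks information.
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return len(guess) == len(SECRET)

def timed(guess, trials=100_000):
    start = time.perf_counter()
    for _ in range(trials):
        naive_check(guess)
    return time.perf_counter() - start

# A guess sharing a longer prefix with the secret tends to take slightly longer.
print(timed("zzzzzzz"), timed("huntzzz"))
```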

The article mentions that these microarchitectural attacks are becoming more common because other methods of attack have become more difficult. I thought that was interesting because it seems like a relatively new method and a new security risk that has not had time to receive security-focused development. This is an issue that can affect anyone and is a serious problem. On top of that, performance is a big concern with this kind of security since it deals directly with the CPU and its architecture, which is not an easy fix. The article also points out that because of these attacks, more information sharing between applications is not always a good thing. I find this pretty interesting since a large number of different applications made by the same company now have information-sharing capabilities, such as the Microsoft umbrella of software. Sharing information between applications can actually put you at more risk than the time saved by sharing is worth.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Detecting fake news at its source

http://news.mit.edu/2018/mit-csail-machine-learning-system-detects-fake-news-from-source-1004

This is an interesting article about how researchers are trying to make a program that will decide whether or not a news source is reliable. The program uses machine learning and scrapes data about a site to make its determination, and it only needs about 150 articles to reliably detect whether a source is accurate. Researchers first took data from mediabiasfactcheck.com, a site where human fact checkers analyze the accuracy of over 2,000 news sites. They then fed that data into their algorithm to teach it how to classify news sites into high, medium, or low levels of factuality. As of now, the system is 65 percent accurate at detecting these levels of factuality and 70 percent accurate at deciding whether a source is left-leaning, right-leaning, or moderate. The researchers determined that the best way to detect fake news was to look at the language used in a source's stories: fake news sources were likely to use language that is hyperbolic, subjective, and emotional. The system was also able to read the Wikipedia pages of fake news sources and noticed that those pages contained an abnormal number of words like "extreme" or "conspiracy theory." It even found correlations with the structure of a source's URL; sources with lots of special characters and complicated subdirectories were associated with being less reliable.
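As a toy illustration of the general approach (not the researchers' actual system), here is a minimal Python sketch that trains a text classifier on articles from sources with known factuality labels and then scores a new source; the tiny dataset, labels, and model choice are all made up.

```python
# A toy sketch: learn to separate factual from hyperbolic writing, then score a
# handful of articles from a new source. Real systems use far richer features
# (Wikipedia pages, URL structure, etc.) and far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Officials confirmed the budget figures in a public report.",           # high factuality
    "The committee released audited data and cited its sources.",           # high factuality
    "SHOCKING conspiracy EXPOSED: the extreme truth they hide from you!",   # low factuality
    "Unbelievable secret plot revealed, experts silenced, share now!!!",    # low factuality
]
labels = ["high", "high", "low", "low"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(articles, labels)

# Score an article from a new, unlabeled source.
new_source = ["Outrageous hidden conspiracy the media will not report!!!"]
print(model.predict(new_source))  # likely ['low'] given the hyperbolic wording
```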

I think this is noteworthy because of how important accurate information is on the internet. There are a large number of people who spread misinformation on social media and influence the behaviors and thoughts of their readers. There are constantly news stories about bots from Russia or other countries trying to spread misinformation not only to the United States but to anyone they consider a threat. The spread of this fake news can cause a lot of problems and poses such a serious issue for the information age that I have heard people refer to our current time as the "misinformation age." If automated software were able to detect fake news with near-perfect accuracy, it would help the entire planet combat such a large and seemingly unbeatable problem. Something that takes hundreds of fact checkers could take a program a few minutes, and it could be accessible to the general public for all of their needs. Although the algorithm in its current form is not accurate enough, it is a step forward and a look at a possible solution that we desperately need.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Model helps robots navigate more like humans do

http://news.mit.edu/2018/model-helps-robots-navigate-like-humans-1004

The paper describes a model in which researchers combined a planning algorithm with a neural network that learns to recognize paths leading to the best outcome, then uses that knowledge to guide a robot through an environment. The researchers demonstrate their model in two settings: navigating through rooms that have traps and narrow passageways, and navigating through a room without any collisions. The model learns by being shown a few examples of similar environments and then bases its actions on those. For example, if it recognizes a door it will know to exit through it, because the learned examples all exit through a door. The model combines older, more common methods with this newer take on machine learning. The planner creates a search tree while the neural network mirrors each step and makes a prediction, based on probabilities, about which action to take. If the network has high confidence of success it acts on its prediction; otherwise it falls back on exploring the environment like the traditional method. One application for this model is autonomous cars, where there are multiple agents all operating at the same time in the same space. For autonomous cars, intersections and especially roundabouts are extremely challenging since there are numerous cars moving around a circle, entering and exiting all at once. The results indicate that this model can learn enough behavior from previous experience to navigate something that challenging. In addition, the researchers only needed a few examples with very few cars in a roundabout to be successful.
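Here is a minimal Python sketch of the hybrid idea described above: consult a learned policy for action probabilities and only commit when it is confident, otherwise fall back to exploration. The toy policy, confidence threshold, and random fallback are hypothetical stand-ins for the paper's neural network and search tree.

```python
# A minimal sketch of a planner/policy hybrid: trust the learned prediction only
# when its confidence is high, otherwise explore. All values are illustrative.
import random

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for "high confidence"

def choose_action(state, policy, actions):
    """policy(state) -> dict mapping each action to a predicted success probability."""
    probs = policy(state)
    best_action = max(probs, key=probs.get)
    if probs[best_action] >= CONFIDENCE_THRESHOLD:
        return best_action           # trust the learned prediction
    return random.choice(actions)    # fall back to exploring the environment

# Hypothetical policy that is only confident when it "recognizes" a door ahead.
def toy_policy(state):
    if state == "door_ahead":
        return {"forward": 0.95, "left": 0.03, "right": 0.02}
    return {"forward": 0.4, "left": 0.3, "right": 0.3}

print(choose_action("door_ahead", toy_policy, ["forward", "left", "right"]))  # forward
print(choose_action("open_room", toy_policy, ["forward", "left", "right"]))   # explores
```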

I find this article interesting since autonomous cars are going to be extremely popular in the future. This model is another advancement toward making self-driving cars even more independent, to the point of something from a sci-fi movie. In addition, having something that can efficiently navigate a room is important for other applications, such as assisting someone who is blind. Something like this could give a person who has to rely on someone else more independence if all they need to do is wear glasses that tell them where objects in the room are located and how to navigate around them.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Machine-learning system tackles speech and object recognition, all at once

http://news.mit.edu/machine-learning-image-object-recognition-0918

Speech recognition systems usually require hundreds of thousands of transcripts in order to work properly. This new model of machine learning learns through audio-visual associations, similar to how a child learns by correlating speech with related images. The researchers then modified the model to associate specific words with specific pixels. It works by dividing the image into a grid of cells consisting of patches of pixels while dividing the audio into segments of the spectrogram. It then compares each image cell to each audio segment and produces a similarity score for each pair. The researchers call this comparison method a "matchmap." One good use of this is learning translations between all of the languages on the planet. There are an estimated 7,000 languages spoken worldwide and only about 100 have transcription data for speech recognition. With this model, two speakers of different languages can describe the same image and the machine can learn the speech signals of the two languages and match the words, making them translations of one another. This is interesting because it means the model does not require actual text to learn to translate. For languages that are not commonly written down, the machine can translate meanings where the methods common today cannot.
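As a toy sketch of the matchmap idea, the code below compares an embedding for every image cell against an embedding for every audio segment and keeps the resulting similarity scores; the random vectors and grid sizes are placeholders for the model's learned features.

```python
# A toy sketch of a "matchmap": a similarity score for every (image cell, audio
# segment) pair. Random embeddings stand in for the model's learned features.
import numpy as np

rng = np.random.default_rng(0)
image_cells = rng.normal(size=(14 * 14, 128))   # e.g. a 14x14 grid of cell embeddings
audio_segments = rng.normal(size=(50, 128))     # e.g. 50 spectrogram segments

# Normalize so the dot product becomes a cosine similarity score.
image_cells /= np.linalg.norm(image_cells, axis=1, keepdims=True)
audio_segments /= np.linalg.norm(audio_segments, axis=1, keepdims=True)

# matchmap[i, j] = similarity between image cell i and audio segment j.
matchmap = image_cells @ audio_segments.T
print(matchmap.shape)   # (196, 50)
print(matchmap.max())   # the strongest cell/segment association
```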

This method is important to note because machine learning is a growing topic in the world of computer science and this could open up all kinds of possibilities. It is a new and innovative way to solve a problem and might become something needed for future jobs or projects. With this matchmap system, speech recognition no longer needs to be manually taught hundreds of thousands of transcriptions and examples of those transcriptions in order to function properly. This is increasingly important since new words enter our dictionary and become common over time. Currently the machine can only recognize a few hundred words, but in the future it could help advance the machine learning field while also improving existing speech recognition software such as Siri.

 

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Introductory Blog Post

Hello, my name is Timothy Montague, and this is my first blog post for CS-443.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.