Monthly Archives: December 2018

A Brief Look at Mocking

We’ve played around with different testing concepts like mocks and stubs in class; however, since we didn’t dive too far into them, I decided I’d look a little deeper into the subject of mocking. I found this great overview of it on Michael Minella‘s blog.

Before diving into why mocking is helpful, let’s look at the purpose of mocks in the first place and what they even are. In class we had an example where a student object had a transcript object, and in order to get the student’s remaining credits or current GPA, the student would reference the transcript’s methods. Something like this:
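(What follows is a minimal sketch with illustrative names, not the exact code from class.)

    // Transcript knows the academic data; Student just delegates to it.
    class Transcript {
        public double getGPA() {
            // In the real class this would be computed from recorded grades.
            return 0.0;
        }

        public int getRemainingCredits() {
            return 0;
        }
    }

    class Student {
        private final Transcript transcript;

        public Student(Transcript transcript) {
            this.transcript = transcript;
        }

        // Student's getGPA() simply calls through to its transcript.
        public double getGPA() {
            return transcript.getGPA();
        }

        public int getRemainingCredits() {
            return transcript.getRemainingCredits();
        }
    }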

If we wanted to test the Student’s getGPA() method, it would involve calling the transcript’s getGPA() method. However, we don’t want to test the transcript’s method; we want to test Student’s. How do we know Student.getGPA() does the intended thing, which is to call transcript.getGPA()? In his blog post, Michael uses the example of a test that uses the ArrayList object in Java. When we write a test that uses an ArrayList, we don’t want to test the functionality of ArrayList; we want to test our implementation of it and trust that ArrayLists function correctly. Mocks are designed to solve this problem.

A mock is a temporary object that is used in place of the real object during a test. The mock has default values for all of its data types, so whether or not a test passes does not depend on the state of the other objects or methods inside of it. In the case above, transcript.getGPA() would return a value of 0.0 if a mock transcript object had been set up beforehand. This is helpful because it removes the potential puzzle of what the transcript’s methods are doing and lets us focus solely on what the student’s getGPA() is actually doing.

There are plenty of different mocking libraries to choose from (we used Mockito in class) and they all function in slightly different ways. The subject of mocking is a deep one, and there are seemingly infinite other concepts like stubs, fakes, dummies, and more, which can all be used to ensure that your tests are as thorough and reliable as possible.
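To make that concrete, here is a minimal sketch of a Mockito-style test for the Student example above; the setup details are my own illustration rather than code from Michael’s post:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;

    import org.junit.jupiter.api.Test;

    class StudentTest {

        @Test
        void getGpaSimplyDelegatesToTheTranscript() {
            // A mock Transcript: any method we don't stub returns a default (0.0 for double).
            Transcript transcript = mock(Transcript.class);
            // Stub only the behavior this test cares about.
            when(transcript.getGPA()).thenReturn(3.5);

            Student student = new Student(transcript);

            // We are testing Student's behavior, not how Transcript computes a GPA.
            assertEquals(3.5, student.getGPA(), 0.0001);
            verify(transcript).getGPA();
        }
    }

Because the transcript is a mock, the test passes or fails based only on what Student.getGPA() does, which is exactly the isolation described above.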

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

Black-Box vs White-Box Testing

Source: https://www.guru99.com/back-box-vs-white-box-testing.html

This week’s reading is about the differences between black-box and white-box testing. For starters, it states that in black-box testing the tester does not have any information about what goes on inside the software, so it mainly focuses on tests from the outside, at the level of the end user. In comparison, white-box testing allows the tester to check within the software; they have access to the code, which is why white-box testing is also called code-based testing. In this article, the differences between the two types of testing are listed in a table, and categories such as basis of testing, usage, automation, and objective all differ. For example, black-box testing is said to be ideal for higher-level testing like system testing and acceptance testing, while white-box testing is much better suited to unit testing and integration testing. The advantages and disadvantages of each method are clearly defined and provide a clear picture of how each method pans out.

What I found useful about this article is the clear and concise language it uses to describe each category. Unlike other articles I’ve come across on the topic, it doesn’t beat around the bush or make it difficult to discern the importance of each type of testing. Much of the information provided by this article can also be supported by activities done in class. One of the categories, time, labeled black-box testing as less exhaustive and less time-consuming, while white-box testing is the very opposite. I somewhat agree with this description, since with white-box testing you have much more information to work with: every detail of the code can in some way be turned into a test as deemed necessary, and, as stated, the overall quality of the code is being checked during this kind of test. In black-box testing, the main objective is to test the functionality, which means it is not as extensive a test in general. What also struck me as interesting was the category granularity. A single Google search yielded the meaning “the scale or level of detail present in a set of data”: low for black-box and high for white-box, which rings true for both tests. In conclusion, this article reinforces prior knowledge of the differences between black-box and white-box testing.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

A Bug’s Life: How to Reproduce One

For this week’s blog post I found a blog that shows you how to reproduce a bug in your application. Why is this important? Well, if you see something wrong in your application but cannot reproduce it, this blog may help. An obvious but important first step is simply to gather information, as much of it as you possibly can, about the circumstances of the issue you are looking for. Retracing the steps taken just before the bug appeared is a good way to do this, and the same goes if someone else reports the bug. If another person reported it: are they using a different operating system? A different browser, or a different browser version? Again, obvious questions to some, but others might not think of them.

Keeping track of the changes you have made while trying to reproduce the bug is very important; knowing what you’ve tried and haven’t tried is crucial. I know this is something I would probably not keep track of, which would cause me a lot of frustration. Along the same lines, using logs and developer tools is extremely helpful for getting a sense of what is happening behind the scenes of the application. If you are using a browser, especially Chrome, you can find the developer tools by clicking the three-dot menu at the top right.

The blog then lists numerous factors to consider when trying to reproduce a bug; here I will discuss a few. Going back to the user end of the bug: did the user have the correct permission or a specific permission level? If so, you may be dealing with a bug that is only seen at the administrator level or by a certain type of customer. Authentication may be something to factor in if the user cannot log in. What is the state of the data that the user has, and can you reproduce that state exactly? The bug may only appear when a user has a very long last name or a particular type of image file. Another simple factor is configuration-based issues: something in the application may not be set up properly. For example, a user who isn’t getting an email notification might not be getting it because email functionality is turned off for their account. Checking all of the configurable settings in the application and trying to reproduce the issue with exactly the same configuration is the best way to check for this.

Another issue some may not think about is race conditions. The best way to determine whether there is a race condition is to run your tests several times and document the inconsistent behavior. Clicking on buttons or links before a page has loaded to speed the input up, or throttling the internet connection to slow the input down, are good ideas here. The last factor the blog lists is machine- or device-based issues. This is probably the most common one we face, in all honesty; we’ve come across it a few times this semester, even between Mac and Windows. Essentially, something works on a Mac, is then transferred over to a Windows device, and does not work because it was originally made for a Mac and didn’t account for something small that Windows handles differently. After reproducing the bug, you want to narrow the steps down to make them as efficient as possible, making it easier for you or the developer to fix the code in the future.
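To make the race-condition point concrete, here is a tiny example of my own (not from the blog) showing the kind of behavior that only appears on some runs:

    // Two threads increment a shared, unsynchronized counter.
    // Run it several times: the total is often less than 200000,
    // and the exact value changes from run to run: a classic race condition.
    public class RaceConditionDemo {
        static int counter = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++; // read-modify-write, not atomic
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println("Expected 200000, got " + counter);
        }
    }

Running a snippet like this repeatedly and recording the different totals is the same documentation-of-inconsistent-behavior idea the blog recommends for pinning down race conditions.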

All in all, I thought this blog was actually pretty helpful at explaining the steps for reproducing bugs, helping you find them, and making your testing process a bit smoother.

http://thethinkingtester.blogspot.com/2018/12/how-to-reproduce-bug.html

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

What is an Anti-Pattern?

In this week’s blog, I’m going to write about anti-patterns, mainly because I have seen the term a lot and thought to myself that I should at least know something about it. This week’s blog is based on this article.

For a start, we need to understand what algorithms and patterns are. Algorithms have been around for a very long time and are considered among the most fundamental concepts in software engineering. The early treatment of algorithms in “Fundamental Algorithms” by Donald Knuth, published in 1968, mainly provides calculus-based proofs of its solutions and code examples in outdated languages like ALGOL or MIX assembly. However, much of the subject matter it covers, such as singly and doubly linked lists, garbage collection, and trees, is still widely used today, and these remain valid solutions to common software engineering problems more than five decades later.

A “pattern” can be seen as a more general form of an algorithm: while an algorithm focuses on a specific programming task, a pattern offers advantages in areas such as reducing defect rates, enabling effective teamwork, and increasing the maintainability of the code. Four common patterns used in the industry are:

   – Factories: A concept in object-oriented programming that eliminates the need for the creator of an object to know everything about it ahead of time; in other words, the caller can treat the object generically (as “a square”, say) without knowing the details it contains. This pattern simplifies the program and encourages more efficient teamwork (a small sketch of this idea follows the list).
 
   – Pub/Sub: An asynchronous messaging pattern between senders and receivers. Instead of sending messages directly to the receivers, the sender publishes them to a queue, and multiple receivers can subscribe to retrieve those messages. The queue can also handle details such as transmission errors and resending messages. While the factory pattern is more code-oriented, pub/sub is more architectural in nature.

   – Public-key Cryptography: A mechanism widely used for security, for example in SSH or on a git server, that allows two parties to communicate securely, without interception, without ever exchanging a secret encryption key. Usually, each party maintains a pair of keys (public and private), and the public key can be shared as needed.

   – Agile: A philosophy that establishes a set of guiding principles for software development that emphasize customer satisfaction. It also embraces the need for flexibility and collaboration and promotes simple, sustainable development practices.
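To make the factory idea concrete, here is a minimal sketch; the Shape, Square, Circle, and ShapeFactory names are my own illustration rather than anything from the article:

    // The caller asks the factory for "a shape" and never needs to know which
    // concrete class is constructed or what details that class contains.
    interface Shape {
        double area();
    }

    class Square implements Shape {
        private final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    class ShapeFactory {
        // Creation logic lives in one place; callers just name what they want.
        static Shape create(String kind, double size) {
            switch (kind) {
                case "square": return new Square(size);
                case "circle": return new Circle(size);
                default: throw new IllegalArgumentException("unknown shape: " + kind);
            }
        }
    }

A caller can then write Shape s = ShapeFactory.create("square", 2.0); and later switch to circles without changing any of its own code, which is where the teamwork and maintainability benefits come from.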

If a “pattern” suggests a solution to common software engineering problems, should an “anti-pattern” be the exact opposite? An anti-pattern is actually a term that captures the concept of failing to do the right thing: some choices in the development process may seem right at the time, but they lead to problems in the long term. Wikipedia defines an anti-pattern as a common response to a recurring problem that is usually ineffective and risks being highly counterproductive. As that definition suggests, these are common mistakes, and they are nearly always made with good intentions. Just like patterns, anti-patterns can be really broad or specific, depending on what we are considering.

From the blog #Khoa'sCSBlog and used with permission of the author. All other rights reserved by the author.

Typescript vs JavaScript

https://stackify.com/typescript-vs-javascript-migrate/

Given that we have been working with TypeScript, I wanted to find a blog post that compares and contrasts these languages to get a better understanding of their benefits. First we should define what the languages are. TypeScript is an open-source “syntactic superset” of JavaScript that compiles to JavaScript; it offers the developer static type checking that is enforced when the TypeScript is compiled. JavaScript is almost the opposite of this: it uses typeless (dynamically typed) variables that can be assigned any kind of data. That is a basic understanding of the two languages (neither of which, despite the name, is based on Java).

From here, let’s discuss the benefits of using TypeScript. As stated above, TypeScript allows for the assignment of static types. This prevents complications like data being changed when it shouldn’t be, or faulty variable assignments in the first place.

An Example:

TypeScript, when a is declared as a number (let a: number = 1;): a later assignment a = "1" is a compile-time error.

JavaScript, where a is typeless (let a = 1;): the assignment a = "1" is allowed.

The point is that JavaScript will happily let you reassign a to the string "1", while TypeScript will reject that assignment at compile time. Additionally, TypeScript adds other features on top of JavaScript, such as interfaces, generics, namespaces, null checking, and access modifiers.

Interestingly, the blog talks about how static typing is not the only reason, or at least not the exclusive reason, to use TypeScript. When deciding whether to use TypeScript, the blog offers this advice:

TypeScript

  • Prefer Compile Time Type Checking: It is entirely possible to perform runtime type verification using vanilla JavaScript. However, this introduces additional runtime overhead that could be avoided by performing compile-time validation
  • Working with a New Library or Framework: Let’s suppose you’re taking up React for a new project. You are not familiar with React’s APIs, but since they offer type definitions, you can get intellisense that will help you navigate and discover the new interfaces.
  • Large Projects or Multiple Developers: TypeScript makes the most sense when working on large projects or you have several developers working together. Using TypeScript’s interfaces and access modifiers can be invaluable in communicating APIs (which members of a class are available for consumption).

JavaScript

  • Build Tools Required: TypeScript necessitates a build step to produce the final JavaScript to be executed. However, it is becoming increasingly rare to develop JavaScript applications without build tools of any kind.
  • Small Projects: TypeScript may be overkill for small teams or projects with a small code surface area.
  • Strong Testing Workflow: If you have a strong JavaScript team who is already implementing test-driven development, switching to TypeScript may not give you enough to make it worth the associated costs.
  • Added Dependencies: In order to use libraries with TS, you will need their type definitions. Every type definition means an extra npm package. By depending on these extra packages you are accepting the risk that these may go un-maintained or may be incorrect. If you choose not to import the type definitions, you are going to lose much of the TS benefit. Note that the DefinitelyTyped project exists to mitigate these risks. The more popular a library is, the more likely the type definitions are to be maintained for the foreseeable future.
  • Framework Unsupported: If your framework of choice does not support TS, such as EmberJS (although this is planned and is the language of choice for Glimmer), then you may be unable to take advantage of its features.

The author, Jared Nance, goes on to say that TypeScript isn’t simply the best tool out there for every project, but he presents the information as fairly as possible. When considering whether to use TypeScript or JavaScript, the reasons above may be good guidelines for making that choice.

From the blog CS@Worcester – Computing Finn by computingfinn and used with permission of the author. All other rights reserved by the author.

Useful Wireframes


In this week’s blog I’m going to step back to the basics of design and take a look at effective wireframe design and implementation. Wireframes are the backbone of a project and help the team of developers show their ideas on layouts, visual hierarchies, content, and information architecture. Wireframes use placeholders to represent the content that will come later in the development process. Building wireframes has a variety of benefits, such as knowing where everything will go before implementing the exact details, the ability to create foundations early on in the process, and a simplicity that allows for creativity and experimentation.

One of the best things about wireframes is how easy and accessible they are to make. While there are specialized wireframing tools and image editors, a wireframe can be made simply with a piece of paper and a pen.


The first step in making a wireframe, the article suggests, is to create a content inventory. The content inventory allows the developer to get all their materials in one place before building. Inventories are usually spreadsheets that list all the separate content that will be included on the page; this can include things such as a list of URLs with descriptions. The content should be organized by topic and allocated to the correct page. Next, it is recommended to create a visual hierarchy by marking each item by importance as primary or secondary.


We can then begin to make the wireframe itself. We start by making a content wireframe, which allows us to place all our content and prioritize the essential elements. For this reason it is recommended to start with a mobile-first approach when making an application, since you can always scale upward if needed. Once all the content is generally in place, it is time to “sculpt” the wireframe by designating where individual links and icons will go and setting spacing and sizes for images. It is then recommended to turn the wireframe into a lo-fi prototype so that testing can begin. Certain platforms allow drag-and-drop interactivity, which helps surface usability issues early on.


This method allows for rapid prototyping, which involves creating prototypes quickly, testing them, and then incorporating the feedback into the next build. The design is refined as you go instead of testing everything at once. This leads to much more efficient wireframes and, overall, better applications.

From the blog CS@Worcester – Jarrett's Computer Science Blog by stonecsblog and used with permission of the author. All other rights reserved by the author.

Journey into Schematics – An Introduction

As I take another step towards my journey in software C.D.A., I dive into an introduction to Angular Schematics. In this week’s blog I will be summarizing the blog post “Schematics – An Introduction” by Hans.

This blog gives an introduction to Schematics and what they are. It lists the goals of using Schematics, shares how to understand schematics, and shows how to create your first schematic. It also tells you about the Schematics collection and how to use it, going over concepts such as Rules and Trees.

  • “A Rule is a function that takes a Tree and returns another Tree”
  • “A Tree contains the files that your schematics should be applied on”

The blog also shows you how to run your new schematics. It mentions that in debug mode you will not be able to write files to the file system, because the “schematics tool is in debug mode when using a path as the collection it should use” and “the default is also to run in dry run mode, which prevents the tool from actually creating files.” This can be changed by changing the default argument, and the blog has an example of how to do so. It then explains that a great advantage of Schematics is that calling another schematic is very easy, and composing them together is just as easy. It also mentions that “the best usage of Schematics for your users is currently through the Angular CLI.”

This is a well written blog that gave good examples that are understandable and on point. I found the idea of schematics very interesting and also useful, because they are capable of creating new components or updating code in order to fix breaking changes in dependencies. They also provide ease of use and development when working with web applications, particularly with Angular. This has made me consider using Schematics, since it is very beneficial to do so. This blog has taught me a new topic not really covered in class. I can honestly say that there is not a single thing I disagree with in the blog. For more on the blog and its examples, check it out by clicking on the title of the blog; it will link you to it. Thank you for your time. This has been YessyMer in the World Of Computer Science, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Journey into Angular Schematics: Unit Testing

As I take another step towards software quality assurance testing, I dive into Angular Schematics for unit testing. The blog I read that covers this topic was “Angular Schematics: Unit Testing” by Jonathan Campos. This blog covers the basics of unit testing code, mimicking an application environment, adding a test, asserting on files created, and asserting on content created, all in order to discuss and show how to create a unit test for an Angular schematic. This is a well written blog that gave good examples that are understandable and on point. The blog taught me that Angular projects should and can be tested, which is something I will be practicing on any of my future Angular projects. In the following paragraph I will briefly summarize what was covered in the blog, excluding the examples.

The basic unit testing code section just explains the basics of unit testing and serves as the starting point. It then shows an example of what a file looks like when the Schematics generator creates it. That file does not do much; it just “run[s] the schematic in a silo and asserts that the output is an empty file tree”. The post goes on to explain that to test your code accurately you must go further by mimicking the application environment. This is done by adding more to the unit test so it reflects the actual environment where the code will be running. To do this, some of the Angular schematics must be run in order “to build up the Angular project workplace”. This can be done by adding code to the test that specifies the set of options each schematic should run with. Before anything is run, an application file tree must be created, so that before each test the code runs and creates the Angular project workspace. Once that is done, you can add tests to the testing environment. The blogger gives examples of what happens in the tests by creating three different scenarios. First, he tests that an error is thrown when a schematic is run without an Angular project tree set up. Second, he tests what happens when a schematic is run without the required parameters. Third, he tests that everything works when it is set up properly. Once the tests were ready, the next step was asserting on specific values, either asserting on files created or asserting on content created. He explains a little more in the blog and provides examples.

For more on the blog and its examples, check it out by clicking on the title of the blog; it will link you to it. Thank you for your time. This has been YessyMer in the World Of Computer Science, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Premature optimization

I am writing in response to the blog post at https://9clouds.com/blog/premature-optimization-the-root-of-all-evil/ titled “Premature Optimization: The Root of All Evil”.

Premature optimization is about focusing on making sure the code will be able to run as fast as possible before anything actually works yet. It is time consuming and, by definition, “premature”, so it is not a good thing to do. The blog post quotes Donald Knuth, who said “Premature optimization is the root of all evil.” For sizable projects, premature optimization is practically procrastination. It is about making sure that the program works well in theory before making sure that it works at all. The blog post seems to be referring to premature optimization on the scale of businesses, whereas I am only considering the effect on individual programs and projects, but the effects are the same. It makes more sense to finish something that works and then improve it than it does to improve it first and then finish it.

I have done a lot of premature optimization on small projects, and it certainly does take up a lot of time. I tend to do it just because it is interesting, though, to try to come up with such an implementation that performs optimally. It is a secondary challenge that puts off the original task. If it becomes tedious or starts to seem wasteful then I just write something that works and move on to the next part, and if it matters or makes a difference, and it never does, I can just look into how to make things faster again. The idea of actually producing something in a professional sense and becoming caught up in the performance details on a small scale before the rest of the product is finished definitely seems like it would be a waste of time. Premature optimization requires making predictions of how things are going to work after they are made, and devising a plan to make things fit together smoothly before the things even exist yet. It makes much more sense to make the things first, and then make them work together better afterwards.

From the blog cs-wsu – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Black/White box testing

I am writing in response to the blog post at https://www.guru99.com/back-box-vs-white-box-testing.html titled “Black Box Testing Vs. White Box Testing: Key Differences”.

Black box testing and white box testing are topics that we have covered in the CS 443 software quality assurance and testing class. Black box testing involves testing from the perspective of a user, who does not have access to the code, and thus the testing is done without referring to the code. Instead, the behavior is tested directly by checking inputs against expected outputs. White box testing involves code coverage and testing different branches and paths of code based on the code itself and not a pre-defined specification for the behavior to meet.
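As a quick illustration of my own (not from the article), the same small method can be approached both ways: a black-box test checks inputs against the outputs the specification promises, while a white-box tester also makes sure every branch in the code gets exercised.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class DiscountTest {

        // Hypothetical method under test: the spec says "members get 10% off".
        static int applyDiscount(int price, boolean isMember) {
            if (isMember) {
                return price - (price / 10); // members get 10% off
            }
            return price;                    // everyone else pays full price
        }

        // Black-box view: these cases come straight from the spec, without
        // reading the implementation.
        @Test
        void pricesMatchTheSpecification() {
            assertEquals(90, applyDiscount(100, true));
            assertEquals(100, applyDiscount(100, false));
        }

        // White-box view: the same two cases also happen to cover both branches
        // of the if statement, which is the kind of coverage a white-box tester
        // would check for explicitly.
    }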

The blog post provides a table of comparisons between black box testing and white box testing. Most of the points are that black box testing is done without access to the code and white box testing focuses on the code. One of the points made is that black box testing is not good for testing algorithms. This makes a lot of sense, and for certain algorithms, black box testing would be impossible to do. Black box testing requires some specification for the behavior of the program to meet, and that specification is supposed to cover everything that might go wrong with the program. This is fine when the program is made up of a set of conditional branches, but something more intricate like a computational geometry or machine learning algorithm would be impossible to test every case for because the number of different paths the code can take is on a completely different scale. White box testing in this case would be the only way to check that the algorithm will work, and that is by proving that it works. The only way black box testing could prove correct behavior would be with an exhaustive proof.

White box testing is not necessarily as easy or convenient as black box testing because it requires understanding and analyzing the code instead of just running it and seeing what happens. In some cases, though, it is necessary to use.

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.