Category Archives: Week 13

Software Blog

There are lots of “rules” we must follow in object-oriented software development, and the article The Genius of the Law of Demeter by Javadevguy summarizes why they are useful. From what I put together, the Law of Demeter takes abstract object-oriented concepts and distills them into a universal set of rules for object-oriented code.

I thought that the Law of Demeter must be a big deal if someone decided to sit down and write a lengthy blog post about it. The content ended up being interesting as it tried to convince readers why they should obey this “law.” The Law of Demeter essentially spells out which objects a given method is allowed to send messages to; it is a way of thinking about the restrictions and possibilities a method has. One of the takeaways I got from this is how much of the law focuses on the communication between two objects.

The rules listed in the article are as follows, noting that it says “For all classes C, and for all methods M attached to C, all objects to which M sends a message must be”:

  1. self (this in Java)
  2. M’s argument objects
  3. Instance variable objects of C
  4. Objects created by M, or by functions or methods which M calls
  5. Objects in global variables (static fields in Java)

Another useful takeaway I got from this article came from the code examples Javadevguy included, such as how Rule #1 covers that any method can be called on the current object. I also noted what the law prohibits: “sending a message” to an already existing object that is held in the instance variables of another class.
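To make that concrete, here is a small Java sketch I put together myself (the Wallet/Customer names are my own, not from the article): the chained call breaks the law by reaching through to another class’s instance variable, while the delegating method follows it.

```java
// A minimal sketch (my own names, not from the article) of a
// Law of Demeter violation and one way to fix it.

class Wallet {
    private double balance = 20.0;

    void deduct(double amount) { balance -= amount; }
}

class Customer {
    private final Wallet wallet = new Wallet();

    // Violation risk: exposes the wallet so callers can reach "through" Customer.
    Wallet getWallet() { return wallet; }

    // Compliant: sends one message to an instance variable (Rule 3).
    void pay(double amount) {
        wallet.deduct(amount);
    }
}

class Cashier {
    void collectViolating(Customer customer, double amount) {
        // Breaks the law: Cashier talks to an object held in another
        // class's instance variable by chaining through getWallet().
        customer.getWallet().deduct(amount);
    }

    void collectCompliant(Customer customer, double amount) {
        customer.pay(amount); // customer is a method argument, allowed by Rule 2
    }
}
```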

This will affect the way I continue to do work as an object-oriented developer. I mean, in life, when you learn there is a more useful or structured way to help you achieve more effective results, you would want to try it or follow it, right? For my future use, I will acknowledge (like other blogs and articles do) that the Law of Demeter is less of a law and more of a suggestion or guideline.

Overall, I would say that the content has helped further solidify my understanding of some object-oriented coding concepts. I agree with the article, as it is trying to help people become better developers or better understand the Law of Demeter in general.


Article: https://javadevguy.wordpress.com/2017/05/14/the-genius-of-the-law-of-demeter/

From the blog CS@Worcester by samanthatran and used with permission of the author. All other rights reserved by the author.

A Brief Look at Mocking

We’ve played around with different testing concepts like mocks and stubs in class; however, since we didn’t dive too far into them, I decided I’d look a little deeper into the subject of mocking. I found this great overview of it on Michael Minella’s blog.

Before diving into why mocking is helpful, let’s look at the purpose of mocks in the first place and what they even are. In class we had an example where a student object had a transcript object, and in order to get the student’s remaining credits or current GPA, the student would reference the transcript’s methods. Something like this:
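(The code snippet from the original example didn’t carry over here, so below is my own minimal reconstruction of the in-class idea; the exact class and method names are assumptions.)

```java
// A Student delegates GPA and credit lookups to its Transcript.

class Transcript {
    double getGPA() {
        // Imagine this computes a GPA from stored course records.
        return 3.2;
    }

    int getRemainingCredits() {
        return 45;
    }
}

class Student {
    private final Transcript transcript;

    Student(Transcript transcript) {
        this.transcript = transcript;
    }

    double getGPA() {
        return transcript.getGPA(); // delegates to the transcript
    }

    int getRemainingCredits() {
        return transcript.getRemainingCredits();
    }
}
```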

If we wanted to test the Student’s getGPA() method, it would involve calling the transcript’s getGPA() method. However, we don’t want to test the transcript’s method; we want to test the Student’s. How do we know Student.getGPA() does the intended thing, which is to call transcript.getGPA()? In his blog post, Michael uses the example of a test that uses the ArrayList object in Java. When we write a test that uses an ArrayList, we don’t want to test the functionality of ArrayList; we want to test our implementation of it and trust that ArrayLists function correctly. Mocks are designed to solve this problem.

A mock is a temporary object that is used in place of the real object during a test. The mock has default values for all of its data types, so whether or not a test passes does not depend on the state of the other objects or methods inside of it. In the case above, transcript.getGPA() would return a value of 0.0 if a mock transcript object had been set up beforehand. This is helpful because it allows us to remove the potential puzzle of the transcript methods’ functionality and solely focus on what the Student’s getGPA() is actually doing.
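As a rough sketch of how this might look with Mockito (the library we used in class), here is a test I wrote myself against the Student/Transcript classes above; the stubbed 3.5 is an arbitrary value rather than the 0.0 default:

```java
import static org.mockito.Mockito.*;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class StudentTest {

    @Test
    public void getGPADelegatesToTranscript() {
        // Create a mock Transcript; unstubbed methods return defaults (0.0, 0, null).
        Transcript mockTranscript = mock(Transcript.class);

        // Stub only the behavior this test needs.
        when(mockTranscript.getGPA()).thenReturn(3.5);

        Student student = new Student(mockTranscript);

        // We are testing Student's logic, not Transcript's.
        assertEquals(3.5, student.getGPA(), 0.001);

        // Verify that the delegation actually happened.
        verify(mockTranscript).getGPA();
    }
}
```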

There are plenty of different mocking libraries to choose from (we used Mockito in class), and they all function in slightly different ways. The subject of mocking is a deep one, and there are seemingly infinite other concepts like stubs, fakes, dummies, and more, which can all be used to ensure that your tests are as thorough and reliable as possible.

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

Black-Box vs White-Box Testing

Source: https://www.guru99.com/back-box-vs-white-box-testing.html

This week’s reading is about the differences between black-box and white-box testing. For starters, it states that in black-box testing, the tester does not have any information about what goes on inside the software, so it mainly focuses on tests from the outside, at the level of the end user. In comparison, white-box testing allows the tester to check within the software: they have access to the code, which is why another name for white-box testing is code-based testing. In this article, the differences between each type of test are listed in a table format. The basis of testing, usage, automation, objective, and many other categories differ between the two. For example, black-box testing is stated to be ideal for higher-level testing like system testing and acceptance testing, while white-box is much better suited for unit testing and integration testing. The many advantages and disadvantages of each method are clearly defined and provide a clear consensus on how each method will pan out.

What I found useful about this article is the clear and concise language it uses for describing each category. Unlike other articles I’ve come across on the topic, which beat around the bush and make it difficult to discern the importance of each type of testing, this one is direct. Much of the information provided by the article can be supported by activities done in class. One of the categories, labeled “time,” described black-box testing as less exhaustive and time-consuming, while white-box is the very opposite. I somewhat agree with this description, as with white-box testing you have much more information to work with: every detail of the code can in some way be processed into a test as deemed necessary. The overall quality of the code, as stated, is being checked during this kind of test. In black-box testing, the main objective is to test the functionality, which means it is not as extensive a test in general. Also, what struck me as interesting was the category “granularity.” A single Google search yielded the meaning “the scale or level of detail present in a set of data”: low for black-box and high for white-box, which rings true for both tests. In conclusion, this article reinforces prior knowledge on the differences between black-box and white-box testing.
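To make the distinction concrete, here is a small Java/JUnit sketch of my own (not from the article): the first test is written purely from a spec, while the second is written while reading the code so that every branch, including the guard clause, gets covered.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class GradeTest {

    // The method under test: converts a numeric score to pass/fail.
    static String grade(int score) {
        if (score < 0 || score > 100) {
            throw new IllegalArgumentException("score out of range");
        }
        return score >= 60 ? "pass" : "fail";
    }

    // Black-box style: derived only from the spec ("60 or above passes"),
    // without looking at the implementation.
    @Test
    public void specSaysSixtyPasses() {
        assertEquals("pass", grade(60));
        assertEquals("fail", grade(59));
    }

    // White-box style: written while reading the code, aiming to cover
    // every branch, including the out-of-range guard.
    @Test(expected = IllegalArgumentException.class)
    public void negativeScoreHitsGuardBranch() {
        grade(-1);
    }
}
```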

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

A Bug’s Life: How to Reproduce One

For this week’s blog post I found a blog that shows you how to reproduce a bug in your application. Why is this important? Well, if you see something wrong in your application but cannot reproduce it, this blog may help you. An obvious but important first step is to simply gather information, as much of it as you possibly can, about the circumstances of the issue you are looking at. Retracing the steps taken just before the bug appeared is a good way to do this, and the same goes if someone else reports the bug. If another person reports it: is it a different operating system they’re using? A different browser, or a different browser version? Again, obvious questions to some, but others might not think about them.

Keeping track of what changes you have made while trying to reproduce the bug is very important; knowing what you’ve tried and haven’t tried is crucial. I know this is something I would probably not keep track of, which would cause me a lot of frustration. Along the same lines, logs and developer tools are extremely helpful in providing a sense of direction into the behind-the-scenes of the application. If you are using a browser, especially Chrome, you can simply go to the top right and find all the developer tools by clicking the three-dot menu.

The blog then lists numerous factors to consider when trying to reproduce a bug; here I will discuss a few. Going back to the user end of the bug: did the user not have the correct permission or a specific permission level? If so, you may be dealing with a bug that is only seen at an administrator level or by a certain type of customer. Authentication may also be a factor if the user cannot log in. What is the state of the data that the user has, and can you reproduce that state exactly? The bug may only appear when a user has a very long last name or a particular type of image file. Another simple factor is configuration-based issues: something in the application may not be set up properly. For example, a user who isn’t getting an email notification might not be getting it because email functionality is turned off for their account. Checking all of the configurable settings in the application and trying to reproduce the issue with exactly the same configuration is the best way to check for this.

Another issue some may not think about is race conditions. The best way to determine whether there is a race condition is to run your tests several times and document the inconsistent behavior. Clicking on buttons or links before a page has loaded in order to speed the input up, or throttling the internet connection to slow the input down, are good ideas for this. The last factor the blog lists is a simple machine- or device-based issue. This is probably the most common one we face; in all honesty, we’ve come across it a few times this semester, even between Mac and Windows. Essentially, something works on a Mac, is transferred over to a Windows device, and no longer works because it was originally made for the Mac and relied on some small thing that Windows does not have. After reproducing the bug, you want to narrow the steps down to make them as efficient as possible, making it easier for either you or the developer to fix the code in the future.
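As a small illustration of the race-condition point (my own Java example, not from the blog), the program below updates a shared counter from two threads without synchronization; running it several times and seeing different totals is exactly the kind of inconsistent behavior worth documenting:

```java
public class RaceConditionDemo {

    private static int counter = 0; // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000; i++) counter++; });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100_000; i++) counter++; });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but lost updates often make it less.
        // Run this several times: inconsistent results across runs are
        // the signature of a race condition.
        System.out.println("counter = " + counter);
    }
}
```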

All in all, I thought this blog was actually pretty helpful at explaining the steps for reproducing bugs, helping you find them, and making your testing process a bit smoother.

http://thethinkingtester.blogspot.com/2018/12/how-to-reproduce-bug.html

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

Journey into Schematics: An Introduction

As I take another step in my journey in software C.D.A., I dive into an introduction to Angular Schematics. In this week’s blog I will be summarizing the blog “Schematics – An Introduction” by Hans.

This blog gives an introduction to Schematics and what they are. It lists the goals of using a schematic design, shares how to understand schematics and how to create your first schematic, and tells you about the Schematics collection and how to use it, going over concepts such as Rules and Trees:

  • “A Rule is a function that takes a Tree and returns another Tree”
  • “A Tree contains the files that your schematics should be applied on”
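As a loose Java analogue of those two definitions (Schematics itself is written in TypeScript, and these names are only illustrative, not the real Schematics API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

class Tree {
    // A Tree holds the files the schematic operates on (path -> contents).
    final Map<String, String> files = new HashMap<>();
}

// A Rule is just a function from Tree to Tree.
interface Rule extends UnaryOperator<Tree> {}

class SchematicDemo {
    public static void main(String[] args) {
        // A rule that adds one generated file to the tree.
        Rule addReadme = tree -> {
            tree.files.put("README.md", "# Generated by a schematic");
            return tree;
        };
        Tree result = addReadme.apply(new Tree());
        System.out.println(result.files.keySet()); // [README.md]
    }
}
```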

The blog also shows you how to run your new schematics. It mentions that in debug mode you will not be able to create files on the file system, because the “schematics tool is in debug mode when using a path as the collection it should use,” and “the default is also to run in dry run mode, which prevents the tool from actually creating files.” That can be changed by changing the default argument, and the blog has an example of how to do so. It then explains that a great advantage of Schematics is that calling another schematic is very easy, and composing schematics together is just as easy. It also mentions that “the best usage of Schematics for your users is currently through the Angular CLI.”

This is a well-written blog that gave good, understandable, on-point examples. I found the idea of schematics very interesting and also useful, because they are capable of creating new components or updating code in order to fix breaking dependency changes. They also provide ease of use and development when working with web applications, particularly Angular. This has made me consider using Schematics, since it is very beneficial to do so. This blog has taught me a new topic not really covered in class. I can honestly say that there is not a single thing I disagree on with the blog. For more on the blog and its examples, check it out by clicking on the title of the blog; it will link you to it. Thank you for your time. This has been YessyMer in the World Of Computer Science, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Journey into Angular Schematics: Unit Testing

As I take another step towards software quality assurance testing, I dive into Angular Schematics for unit testing. The blog I read that covers this topic was “Angular Schematics: Unit Testing” by Jonathan Campos. This blog covers the basics of unit testing code, mimicking an application environment, adding a test, asserting on files created, and asserting on content created, all in order to discuss and show the methods of creating a unit test for an Angular Schematic. This is a well-written blog that gave good, understandable, on-point examples. The blog has taught me that Angular projects can and should be tested, which is something I will be practicing on any of my future Angular projects. In the following paragraph I will briefly summarize what was covered in the blog, excluding the examples.

The basic unit testing section just explains the basics of unit testing as a starting point. It then shows an example of what a file should look like when the schematics generator generates it. That file does not do much; it just “run[s] the schematic in a silo and asserts that the output is an empty file tree.” The blog goes on to explain that to test your code accurately you must go further by mimicking the application environment. This is done by adding more to the unit test so that it reflects the actual environment where the code will be running. To do this, some of the Angular Schematics must be run in order “to build up the Angular project workplace.” This can be done by adding code to the Angular Schematics that specifies the options they should run with. But before anything is run, an application file tree must be created; that way, before each test, the code will run and create the Angular project workspace. Once that is done, you can add tests to the testing environment. The blogger gives examples of what happens by creating three different testing scenarios. First, he tests that an error is thrown when a schematic is run without an Angular project tree set up. Second, he tests what happens when a schematic is run without the required parameters. Third, he tests that everything is set up properly. Once the tests were ready, the next step was asserting on specific values, either asserting on files created or asserting on content created. He goes on to explain a little more in the blog and provides examples.

For more on the blog and its examples, check it out by clicking on the title of the blog; it will link you to it. Thank you for your time. This has been YessyMer in the World Of Computer Science, until next time.

From the blog cs@Worcester – YessyMer In the world of Computer Science by yesmercedes and used with permission of the author. All other rights reserved by the author.

Premature optimization

I am writing in response to the blog post at https://9clouds.com/blog/premature-optimization-the-root-of-all-evil/ titled “Premature Optimization: The Root of All Evil”.

Premature optimization is about focusing on making sure the code will be able to run as fast as possible before anything actually works yet. It is time-consuming and, by definition, “premature,” so it is not a good thing to do. The blog post quotes Donald Knuth, who said, “Premature optimization is the root of all evil.” For sizable projects, premature optimization is practically procrastination: it is about making sure that the program works well in theory before making sure that it works at all. The blog post seems to be referring to premature optimization on the scale of businesses, whereas I am only considering the effect on individual programs and projects, but the effects are the same. It makes more sense to finish something that works and then improve it than it does to improve it first and then finish it.

I have done a lot of premature optimization on small projects, and it certainly does take up a lot of time. I tend to do it just because it is interesting to try to come up with an implementation that performs optimally. It is a secondary challenge that puts off the original task. If it becomes tedious or starts to seem wasteful, then I just write something that works and move on to the next part, and if it matters or makes a difference (and it never does), I can always look into how to make things faster again. The idea of actually producing something in a professional sense and becoming caught up in small-scale performance details before the rest of the product is finished definitely seems like it would be a waste of time. Premature optimization requires predicting how things are going to work before they are made, and devising a plan to make things fit together smoothly before the things even exist. It makes much more sense to make the things first, and then make them work together better afterwards.

From the blog cs-wsu – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Black/White box testing

I am writing in response to the blog post at https://www.guru99.com/back-box-vs-white-box-testing.html titled “Black Box Testing Vs. White Box Testing: Key Differences”.

Black box testing and white box testing are topics that we have covered in the CS 443 software quality assurance and testing class. Black box testing involves testing from the perspective of a user, who does not have access to the code, and thus the testing is done without referring to the code. Instead, the behavior is tested directly by checking inputs against expected outputs. White box testing involves code coverage and testing different branches and paths of code based on the code itself and not a pre-defined specification for the behavior to meet.

The blog post provides a table of comparisons between black box testing and white box testing. Most of the points are that black box testing is done without access to the code while white box testing focuses on the code. One of the points made is that black box testing is not good for testing algorithms. This makes a lot of sense, and for certain algorithms, black box testing would be impossible to do. Black box testing requires some specification for the program’s behavior to meet, and that specification is supposed to cover everything that might go wrong with the program. This is fine when the program is made up of a set of conditional branches, but for something more intricate, like a computational geometry or machine learning algorithm, it would be impossible to test every case, because the number of different paths the code can take is on a completely different scale. White box testing in this case would be the only way to check that the algorithm will work, and that is by proving that it works. The only way black box testing could prove correct behavior would be with an exhaustive proof.

White box testing is not necessarily as easy or convenient as black box testing, because it requires understanding and analyzing the code instead of just running it and seeing what happens. In some cases, though, it is necessary.

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

AntiPatterns: Golden Hammer

Another day, another blog post about a concept from Sourcemaking.com, this time on the Golden Hammer AntiPattern.

Software development is a field that requires continuous learning throughout your entire career. With the sheer number of tools at your disposal, it can be daunting to try to expand your skillset. However, knowing how to utilize as many tools as possible is important for becoming an effective and well-rounded developer. Not only that, being brave enough to drop the tried-and-true way of doing things for newer, more effective solutions is an important soft skill for a team to have.

We’ve all heard the expression, “If all you have is a hammer, everything looks like a nail,” and that’s the exact origin of this AntiPattern. When a team fails to develop their knowledge and expand their skillset, they often fall back on the same process they’ve used in the past. Sometimes a significant financial investment has been made in training the team to use a specific product, and so they don’t want to let that methodology go to waste. That would be fine, except that there is a unique way to approach nearly every new problem developers come across. Reusing old system designs is a surefire way to cause more problems than necessary for the development process.

There are countless issues with this approach: it’s like trying to fit a square peg into a round hole. In a way, teams end up warping the problem in order to make their solution work, as opposed to designing a good solution for the problem. It’s self-limiting too, because teams will only choose problems that they feel comfortable with, relating back to approaches they have taken in the past. Not only this, but if the Golden Hammer is a particular product or tool made by another company, the team can find themselves at the mercy of that company for continued support of that product.

Not only does this AntiPattern require a tremendous amount of refactoring (usually a complete reworking of the product is necessary), it also requires a change in the philosophy of the developer(s). Each and every member of the team needs to be focused on expanding their knowledge of shifting trends in technology, and there must be an emphasis on exploring new tools in the workplace, with some leeway in deadlines as a result.

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

AntiPatterns: Spaghetti Code

Many AntiPatterns seem to arise when code structure is ignored and foundational concepts like those of OOP are forgotten. Spaghetti Code is perhaps the most common and widespread of these, since it can come about so easily in nearly any program regardless of the language or paradigm. In a general sense, it is unstructured code that is difficult to read, edit, or use.

Spaghetti Code involves a very cluttered, messy design (or lack thereof). There are many moving parts which seem disorganized, and readability is generally extremely low, even for the original developer. It’s much more easily found in procedural paradigms, as object-oriented paradigms were made partially to avoid this problem in particular. If Spaghetti Code is found in an object-oriented system, the structure often reflects that of a procedural program.
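To picture the difference, here is a tiny, contrived Java sketch of my own (not from the article), with the same work written first as one tangled blob and then as named stages:

```java
// A contrived contrast between a tangled, single-pass blob and
// a structured version of the same report.

public class ReportDemo {

    // Spaghetti style: parsing, math, and formatting tangled in one method.
    static String spaghettiReport(String csv) {
        double sum = 0;
        int n = 0;
        for (String part : csv.split(",")) {
            if (!part.trim().isEmpty()) {
                sum += Double.parseDouble(part.trim());
                n++;
            }
        }
        return n > 0 ? "avg=" + (sum / n) : "avg=NaN";
    }

    // Structured style: each stage is a named unit that can be read,
    // tested, and modified on its own.
    static double[] parse(String csv) {
        String[] parts = csv.split(",");
        double[] values = new double[parts.length];
        for (int i = 0; i < parts.length; i++) {
            values[i] = Double.parseDouble(parts[i].trim());
        }
        return values;
    }

    static double average(double[] values) {
        double sum = 0;
        for (double v : values) sum += v;
        return values.length > 0 ? sum / values.length : Double.NaN;
    }

    static String structuredReport(String csv) {
        return "avg=" + average(parse(csv));
    }

    public static void main(String[] args) {
        System.out.println(spaghettiReport("1, 2, 3"));  // avg=2.0
        System.out.println(structuredReport("1, 2, 3")); // avg=2.0
    }
}
```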

I’ve written about the God Class before, which is a concept that stems from similar errors, except in a slightly different form. Spaghetti Code differs in that the God Class has one central core class that makes extending and modifying anything more of a challenge than is necessary, while Spaghetti Code generally has several large classes which run in a single, multistage process. Sometimes this kind of messy code can be found within other AntiPatterns, such as Lava Flow: dead ends of routes taken with old projects can result in a jumbled mess of irrelevant code that hangs around for no reason other than that it is difficult to read.

So what can we do about it? Well, like many other AntiPatterns, the best solution is a preventative approach. Make sure that a clearly defined structure for the program is outlined before starting to develop, and make sure that when modifying the existing code, developers adhere to the structure in place. If you’ve found yourself in a position where you need to refactor Spaghetti Code, it could be a case of diminishing returns, where trying to “fix” the program is more difficult than rebuilding it from the ground up. This is a massive waste of development time and money, since if a clear goal had been in place to begin with, the problem could have been avoided altogether.

At the end of the day, a clear vision and game plan for the software you’re writing is the most important piece of development. Without a path to follow, problems become difficult to solve and solutions become difficult to implement, especially when developing a large-scale software system.

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.