Nailing Code Reviews

The article How to Conduct Effective Code Reviews by Billie Cleek covers code reviews, when to use them, and what your objectives and goals should be when working on or submitting a code review. He discusses the different roles you can take in a code review (which are almost analogous to our roles in two of my classes this semester) and what you should expect to do in each of those roles during the review process.

A code review is basically a conversation between developers about a proposed set of changes to a project. It can be a discussion about why a certain part of the code is the way it is, whether or not something is effective, or whether certain changes need to be made and how to go about making them. Code review boils down to having a constructive conversation regarding the development of your project and what changes might need to be made.

I personally have had a lot of trouble communicating difficulties and voicing my opinion in past classes. It is hard to find your voice and be confident, stating the issues you see and opening yourself up to feedback. However, through code reviews, everyone who participates stands to gain knowledge from their peers as well as experience in communicating effectively with colleagues. As long as you are able to give and receive feedback in a helpful, constructive manner, you can help clean up a project, fixing errors and making the code clear and understandable for readers.

In a way, I feel like my software classes this year have done a lot of work in preparing me for being effective in code reviews, as well as in the workplace in general. A lot of the important skills in code reviews are just as important in group work: effective communication, making sure questions are answered, and mutually agreeing on the decisions being made are all essential to having an effective and useful code review. Building these skills in general will make you a better team member, and help you work better in a group on big projects.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Version Control: A Primer

In her post A Gentle Introduction to Version Control, Julie Meloni gives a very easy-to-read walkthrough of version control. Version control is all about maintaining the versions and revisions of your work as you develop it. With good documentation, you can bring back old code you previously removed, or look at issues you had in your program in the past to see if they might be relevant to current issues. There are a lot of benefits to maintaining good version control.

Version control can also be useful in a classroom setting. I can recall multiple assignments this year where we used multiple commits with different labels for different assignment levels. In this way, the instructor could look at code from an earlier part of the assignment even when it had to be modified for a later part of the assignment.

Good version control also leaves you with backups if you want to revert to an earlier version of your program. Say you accidentally release an update with a major bug that slipped through; you can quickly revert to an earlier version while you fix whatever issues there are. You can use branches when you want to split development off in different directions and merge the changes into the main part of your program once you are satisfied, and you can use version control to help avoid situations where there are conflicting commits.

I agree with Julie when she points out that version control has uses in most business and private settings. Really, keeping good documentation of the revisions of all your documents can help organize your projects and keep them easy to modify and revert. For instance, if you are keeping a financial spreadsheet but want to save it every month so you can track the differences over time, good version control is essential for keeping track of the document's revisions. The same goes for modifying a contract while maintaining copies of the older versions for legal reasons. Really, good version control is just part of good organization, letting you work with all the tools at your disposal. You work hard; there is no reason to throw that work away.

From the blog CS@Worcester – Let's Get TechNICKal by technickal4 and used with permission of the author. All other rights reserved by the author.

Terminology – Error, Fault, Failure, Incident, Test, Test Case

Greetings reader!

In today’s blog, I will discuss the differences between some very important terms in software testing: error, fault, failure, incident, test, and test case. This blog will define each term and explain how they all relate. Without any further introduction, let’s begin.

The differences between error, fault, failure, and incident are as follows:

An error is a human action that produces an incorrect result. A fault is a flaw in a component or system that can cause it to fail to perform its required function. A failure is a deviation of the software from its expected delivery or service. An incident is an unplanned interruption: when the status of any activity turns from working to failed and causes the system to fail, it is an incident. A single problem can cause more than one incident, and incidents should be resolved as soon as possible.

An error is something that a human does, we all make mistakes and when we do while developing software, it is known as an error. The result of an error being made is a fault.

When a system or piece of software does not perform the correct action, this is known as a failure. Failures are caused by faults in the software. Note that a software system can contain faults but still never fail (this can occur if the faults are in parts of the system that are never used). In other words, failure is the exposure of one or more faults.

A test is a process that evaluates the functions of a software application with the intent of determining whether the developed software meets its specified requirements, and of identifying defects so they can be fixed to produce a quality product.

A test case is a set of conditions or variables under which a tester will determine whether a system under test satisfies requirements or works correctly. The process of developing test cases can also help find problems in the requirements or design of an application.

All in all, these terms are pretty elementary; however, they are all important in software testing. These terms all relate to each other, and I hope this blog was able to explain them in short detail.


From the blog CS@Worcester – dekeh4 by dekeh4 and used with permission of the author. All other rights reserved by the author.

The Criticisms of Design Patterns

Design Patterns have grown to become a standard in the field of Software Development ever since the Gang of Four published their book, Design Patterns: Elements of Reusable Object-Oriented Software in 1994. However, some people in the field of Computer Science have leveled criticisms towards Design Patterns for a few reasons, even if it is widely accepted that they are tools in the belts of developers at this point.

A legitimate criticism of them seems to be that many of the patterns are heavily language dependent, as some languages have easier workarounds for problems that present themselves in others. Peter Norvig, a director of research at Google, showed that 16 of the 23 patterns from the Design Patterns book become invisible or much simpler when implemented in Lisp or Dylan rather than C++ (page 9 of this PDF). Part of the argument here is that if a pattern disappears in a more expressive language, then the languages that need the pattern are really requiring developers to find workarounds for their missing features.

Another issue that many seem to bring up is that the emphasis on design patterns results in developers relying on them too heavily. Similar to the Golden Hammer AntiPattern, once a developer (or team of developers) becomes comfortable with a tool or concept, they end up attempting to cram the problems they’re given into some implementation that allows them to use the solution they’re comfortable with. In this way, even though Design Patterns are meant to encourage good practice, applying them where they aren’t necessary can quickly turn sour.

The idea behind Design Patterns is that, if they are used correctly alongside correct Object Oriented Design principles, the patterns should emerge naturally. If you ever find yourself asking, “How can I use the Singleton Pattern here?”, then you’re misusing the tool in your tool belt. In a way, they are better viewed as teaching tools for upholding good design in complex situations. If you’re writing code and it dawns on you that what you’re attempting to write is similar to a pre-existing design pattern, then you have a direction to follow. This article in particular gives a great example of an application of the template pattern, and shows how you’d get there in a natural way as opposed to forcing a pattern on a piece of code.
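
As a minimal sketch of that last point (my own illustration, not the linked article's code), here is how the Template Method pattern can emerge naturally: you notice that several report classes share the same skeleton and only one step varies, so the shared skeleton rises into a base class.

```python
from abc import ABC, abstractmethod

class ReportGenerator(ABC):
    """Template Method: the invariant algorithm lives here, while
    subclasses fill in only the step that varies."""

    def generate(self, data):
        body = self.format_body(data)      # the varying step
        return f"{self.header()}\n{body}"  # the fixed skeleton

    def header(self):
        return "=== Report ==="

    @abstractmethod
    def format_body(self, data):
        """Each concrete report decides how to render its rows."""

class CsvReport(ReportGenerator):
    def format_body(self, data):
        return "\n".join(",".join(map(str, row)) for row in data)

print(CsvReport().generate([[1, 2], [3, 4]]))
```

The point is the direction of discovery: the duplication was noticed first, and the pattern named it afterward, rather than the code being contorted to fit a pattern chosen up front.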

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

The Smartest Fly (A)L(I)VE

This week as I was looking for articles, there was one particular article that caught my eye almost immediately: “Artificial fly brain can tell who’s who.” I knew that I had to write about this article. It was posted in mid-October (October 18th to be exact), so it is actually fairly recent.

This article talks about how researchers at the University of Guelph and the University of Toronto have built a neural network that closely models a fruit fly’s visual system, and it can even tell flies apart and re-identify them. They achieved this by combining expert knowledge of the biology of the common fruit fly with machine learning to produce a biologically based algorithm.

The article then talks more about the biology of the fruit fly, and talks more about the computer program in the following paragraph. The article then concludes by talking more about the future of neural networks and AI.

I thoroughly enjoyed this article, and I truly believe it was well worth the read; I encourage others to find the time to read it as well. The part that I found the most interesting was that, using this neural-network-based program, the “artificial fly” was able to identify other flies with a score of 0.75, or about 75%. They tested this by recording a fly for two whole days and then testing the program on the third day to see if it was in fact able to identify it. They also tested the algorithm without the fly-biology constraints, and it scored 0.85 and 0.83, only slightly better than the constrained program, which is a very good result. They also went on to compare it to human fly biologists, who scored only 0.08, and they noted that random chance would score only 0.05. This is unbelievable in my opinion. The fact that a computer program scored that much higher than a human is truly insane. I think that this research is a huge step in the right direction for AI. After reading this article, I am much more interested in AI and plan to continue researching the topic.

Article URL: https://www.sciencedaily.com/releases/2018/10/181025142010.htm

From the blog CS@Worcester – My Life in Comp Sci by Tyler Rego and used with permission of the author. All other rights reserved by the author.

Blog 4 CS-343

Link To Article

In this article, the author talks about steps to take to actually reduce software defects in programs before release. The first step seems kind of obvious as a goal, but nonetheless: stop believing that defects are inevitable. Defects are nothing more than simple mistakes that could have been avoided but were not, and these bugs can be trivial or system-breaking. So instead of assuming these programs are bound to release with problems and defects, aim to write your programs without any defects. Another thing the author points out is that if you are given code to test, you should not simply point the finger at the developer for being lazy. The people writing the code are not trying to skimp on their end to save time; they are often overworked and may simply have missed something. It is also explained in the article that often the first thing to be done to reduce defects is to develop unit test coverage. However, even coverage that spans most of the program can be ineffective. Something else the author points out is the concept of a “bug bash,” where a team takes the software, tries to break it, and then repairs it. While this may provide more knowledge of the software, in the end it doesn’t seem very productive; this kind of practice focuses too much on knowledge of the software instead of getting a product out to clients. Finally, the article points out tendencies to avoid that lead to errors in the first place, such as missing a small statement when copying and pasting between files, or having files that are too complex, leading to inattention errors.

From the blog CS@Worcester – James' Blog by jdenesha and used with permission of the author. All other rights reserved by the author.

Blog 4 CS-443

Link to Article

This article is about a software tester who is trying to help those new to the field avoid some mistakes he has experienced in his career. Some of the mistakes and pitfalls the author highlights are running out of test ideas for a project, missing simple little mistakes, self-doubt about issues in the program, and deciding the priority of what to test in a program. For each of these mistakes that an inexperienced tester can run into, the author gives a few examples of what someone can do in such a situation. One thing the author mentions is that when you are lost on what the testing goal is, you should ask plenty of relevant questions to help you understand. It is better to admit you’re not clear about something in the project and ask for clarification than to be ignorant of the actual goal. I agree with the author as well on his point about a tester’s self-doubt over a bug they think they have found but are uncertain about. In this situation, the author suggests a practice I personally use as well: take a break from the project and come back with a “fresh set of eyes.” This break from testing gives you time to clear your mind and return to the project with a new focus. One thing that I learned from this author was the concept of BCA (Brute Cause Analysis), in which two people work together, with one person brainstorming possible bugs and the other thinking about the different ways those bugs could manifest. This can be a very good idea because it is always helpful to see a problem (or bug) from another person’s perspective. One final thing the author suggests is to trust your gut when you think you may have found a bug but are not sure. In the worst case, you simply find out it was something you did wrong; in the better case, your gut could have paid off.

From the blog CS@Worcester – James' Blog by jdenesha and used with permission of the author. All other rights reserved by the author.

Some simple usability test methods

https://www.infoworld.com/article/3290253/application-development/6-usability-testing-methods-that-will-improve-your-software.html

A/B Testing

A/B testing is a type of testing where two different application designs, generally websites, are tested over time. Data is collected on their performance with some sort of goal in mind, such as product sales. If analytics show that one of the designs was better at achieving that goal, it is declared superior and chosen as the design to go with. This can lead to even more A/B testing against other designs until the development team comes to a decision. The analytics for the test are often done through third-party tools rather than an in-house solution. An A/B test can be as specific as you want, to the point where you change only a single small element between the two designs, or the designs can be made completely different. It is often best to define a specific problem keeping you from your goal that you want to investigate, such as users failing to complete a transaction. A/B testing is extremely clear-cut at providing measurable data for design decisions, but it can take a large amount of time to conduct the tests and produce the data.
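
To make the "analytics show that one design was better" step concrete, here is a rough sketch (my own, with made-up visitor counts) of the standard two-proportion z-test often used to decide whether design B's conversion rate really beats design A's, or whether the difference is just noise:

```python
from math import sqrt

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test on conversion counts.
    Returns the z statistic; |z| > 1.96 is roughly significant at the 5% level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical data: design A converted 200 of 5000 visitors, design B 260 of 5000.
z = ab_test(200, 5000, 260, 5000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests B's lift is not just random variation
```

In practice a third-party analytics tool does this math for you, but the underlying question it answers is the same.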

Design Prototype Testing

Design prototype testing can be used to test a complete workflow in a wireframe or fully designed portion of a product before it goes into development. A UX/UI designer creates the prototype, and the test helps fix usability issues before the project goes any further. First, it is important to define a budget for the project as well as the specific goal. Second, you need to choose a prototyping tool such as Axure. Third, you will need to choose a measuring tool, such as Loop11, to gather analytics from the user. It is important that the development team is familiar with such tools to make the test worth the time and work invested in it.

Formative Usability Testing

Formative usability testing is a type of early-stage testing that focuses more on quality assurance. This test should occur before the first release of the developed product so that it can become the baseline for future tests. With formative usability testing, the product goes through a beta test where groups perform the defined usability tests. Test cases are usually written down to guide participants through specific goals and produce meaningful results. Afterwards, it is important to analyze the feedback and make revisions to the product before the official launch. This can be repeated to improve the product over time.


From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Measuring Similarity

In developing a project for my data mining class, I kept asking myself how I could objectively measure the similarity between two objects. These objects happened to be Magic: The Gathering trading cards, and I was attempting to build a card recommending system not unlike those used by YouTube or Netflix to recommend videos, TV shows, and movies. Essentially I wanted the user to be able to pick any card that exists, and have the system respond with a short list of the most similar cards. I knew the hard part was going to be determining what makes cards “similar.” The most important attributes on the cards are entirely text, and considering that there have been almost 20,000 unique cards printed since 1993, I didn’t even know where to start. All I had was a CSV file of every card ever printed with each card’s text attributes such as card name, card type (creature, sorcery, etc), and the actual effects and rules of the card.

This is where machine learning engineer Christian S. Perone’s blog Terra Incognita comes in to save the day. He explains the use of tf-idf to convert a textual representation of data (like my Magic cards) to a vector space model: an objective, algebraic model of a card’s attributes. The tf-idf of a document (trading card) is the product of two statistics, the term frequency (tf) and the inverse document frequency (idf). The term frequency measures how often a term appears on a specific card, while the inverse document frequency measures how rare the word is across all cards; if a word appears on most cards, it doesn’t tell us much about any one of them. Now that we know about this tf-idf statistic, we can construct a matrix that contains a tf-idf measure (i.e. how informative a word is) for every word on every card.

Now that we have a numerical measure of importance for every word on every card, we need to find cards with similar tf-idf vectors to the card selected by the user. Perone explains the popular cosine similarity method, which takes two vectors and returns a real number value between 0 and 1. We just have to find the cosine similarity between our chosen card and every other card ever printed and return the most similar cards.
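
As a rough sketch of those two steps (with made-up stand-ins for real card text, and plain Python rather than Perone's actual code), tf-idf vectors and cosine similarity can be computed like this:

```python
from collections import Counter
from math import log, sqrt

# Hypothetical "card texts" standing in for real Magic rules text.
cards = {
    "Shock":            "deal 2 damage to any target",
    "Lightning Strike": "deal 3 damage to any target",
    "Divination":       "draw two cards",
}

docs = {name: text.split() for name, text in cards.items()}
vocab = sorted({w for words in docs.values() for w in words})
n_docs = len(docs)

def tfidf(words):
    """tf-idf vector over the shared vocabulary for one card's words."""
    tf = Counter(words)
    # idf = log(N / document frequency): rare words get higher weight
    return [
        (tf[w] / len(words)) * log(n_docs / sum(w in d for d in docs.values()))
        for w in vocab
    ]

def cosine(u, v):
    """Cosine similarity between two vectors, in [0, 1] for tf-idf vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vecs = {name: tfidf(words) for name, words in docs.items()}
for name in ("Lightning Strike", "Divination"):
    print(name, round(cosine(vecs["Shock"], vecs[name]), 3))
```

As expected, the two damage spells score well above zero against each other, while the card-draw spell, which shares no words with them, scores exactly zero.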

Perone very clearly explains these techniques in both plain English and with precise mathematical notation. The entire process has taught me an enormous amount about processing textually represented data, and mathematically following human intuition about what makes two things “similar.” I know that the skills learned through this project are going to be invaluable to me, and they’ve certainly changed the way I think about textual data.

From the blog CS@Worcester – Adventures in Computer Science by zachstevens2808 and used with permission of the author. All other rights reserved by the author.

Somebody Touched My Spaghetti!

For this week’s blog post, I will be looking at an AntiPattern known as Spaghetti Code, from Source Making’s site. Perhaps the most famous AntiPattern, it has existed in one form or another since the advent of programming languages. Essentially, Spaghetti Code is a very cluttered or messy design approach, causing the code to appear almost like spaghetti: all tangled up.

Non-object-oriented languages appear to be more susceptible to this, and it is more likely to occur among those who have yet to fully master advanced object-oriented concepts. The general form of spaghetti code appears in systems with very little software structure: “If developed using an object-oriented language the software may include a small number of objects that contain methods with very large implementations that invoke a single, multistage process flow.” On top of this, object methods are invoked in a very predictable manner, with a negligible degree of dynamic interaction between any of the objects in the system. This causes the system to be very difficult to maintain or extend, allowing no opportunity to reuse the objects and modules in other similar systems. Spaghetti Code usually results from inexperience with object-oriented design technologies, a lack of design prior to the implementation of the actual code, or developers working in isolation and, because of that, ineffective code reviews.

A solution to this not-so-delicious mess is software refactoring (code cleanup), an essential part of software development. When the structure becomes “compromised” by the mess, its support for extensions becomes more and more limited, to the point of uselessness. Ideally, code cleanup should happen throughout the entire development process, but that is an ideal that not everyone (including myself) follows all the time. Doing so on an hourly or daily basis is a good start to the cleanup process.

If simple code cleanup is not working, what next? Prevention is usually the best way to stop spaghetti code. Before you start writing the code, have a plan for what you are designing and how to structure it. Committing to actively refactoring and improving spaghetti code whenever the code needs to be modified is extremely useful for preventing it.

Essentially, if you don’t want spaghetti-and-meatball code, you need to think about the overall structure and have a good idea of what you are going to be developing.
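
As a hypothetical before-and-after sketch (my own example, not from Source Making), here is a single multistage function, and the same logic refactored into small, named pieces that can be tested and reused on their own:

```python
def process_order_spaghetti(order):
    # Before: validation, pricing, and formatting tangled into one flow.
    if not order or "items" not in order or not order["items"]:
        return "ERROR"
    total = 0
    for item in order["items"]:
        total += item["price"] * item["qty"]
    if order.get("coupon") == "SAVE10":
        total = total * 0.9
    return "Total: $" + str(round(total, 2))

# After: each stage has a name and a single responsibility.
def validate(order):
    return bool(order and order.get("items"))

def subtotal(items):
    return sum(item["price"] * item["qty"] for item in items)

def apply_coupon(total, coupon):
    return total * 0.9 if coupon == "SAVE10" else total

def process_order(order):
    if not validate(order):
        return "ERROR"
    total = apply_coupon(subtotal(order["items"]), order.get("coupon"))
    return f"Total: ${round(total, 2)}"
```

The behavior is identical, but the refactored version can grow: a new discount rule only touches `apply_coupon`, instead of forcing you to re-read the whole tangled flow.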


https://sourcemaking.com/antipatterns/spaghetti-code

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.