Documentation is important

Self-documenting code is one of the biggest myths in the software development industry. It is common for someone to think that their code is pretty much self-explanatory, either because it is so simple or because they are the author and should know it better than anybody else. However, what we often fail to realize is that while the code is perfectly transparent to ourselves, other people do not share our perspective, no matter how experienced they are in the field. In this blog post, I would like to argue that code comments have a lot of value, and that documentation is worth more than just explaining how functions or software work.

As most of us understand it, code documentation is basically a collection of descriptions that explain what a codebase does and how it can be used. It can be a simple explanatory comment that sits above a function or a block of code, or a full-fledged developer handbook, usually complete with prescriptive style dos and don'ts, overviews of each part of the application, and approaches to the most common types of coding task.

The usual reason we believe our code is "self-documenting" is that our applications can be boiled down into functions or objects that each have one specific task, and that task should be captured by the name of the function or object. To me, this is not true at all, especially in a large project with a lengthy codebase, where functions may have similar names and share common traits with each other. What we rarely stop to consider is that code without comments basically forces the reader to go through how something is implemented just to figure out why. Documentation matters precisely because it transfers the knowledge of what a function really does, so that people can reuse it without knowing how it was implemented, and that is fine. Beyond giving the user instructions, code comments can also provide example usage, lay out the trade-offs or pros and cons of the current implementation, mark possible improvements in the code, show how to use it effectively with other APIs, and serve as the most effective communication channel between authors and readers.
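To make that concrete, here is a minimal sketch of the kind of comment the post has in mind, attached to a hypothetical `applyDiscount` method (the function and its pricing rules are invented for illustration). Notice that it records the why, an example usage, a trade-off, and a possible improvement, not just the what:

```java
public class Pricing {

    /**
     * Returns the price after applying a discount rate.
     *
     * Why: discount policy lives here rather than in the UI so that
     * every checkout path applies the same rules.
     *
     * Example usage:
     *   double total = Pricing.applyDiscount(100.0, 0.15); // 85.0
     *
     * Trade-off: the rate is applied as a single multiplier; tiered
     * discounts would require a different design.
     *
     * TODO: consider rounding to whole cents before returning.
     *
     * @param price the original price, must be non-negative
     * @param rate  the discount rate, between 0.0 and 1.0
     * @return the discounted price
     */
    public static double applyDiscount(double price, double rate) {
        if (price < 0 || rate < 0 || rate > 1) {
            throw new IllegalArgumentException("invalid price or rate");
        }
        return price * (1.0 - rate);
    }
}
```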

Another flaw of the self-documenting code mindset is that it keeps us standing in a developer-only point of view. To be clearer, what I mean is that we see the value of documentation only as helping people understand how the code works, when documentation should be for every possible user. Developers tend to think only about developers when it comes to software. In reality, the vast majority of software users are end users, and good documentation makes the software more approachable to users who are not developers.

Article reference can be found here.

From the blog #Khoa'sCSBlog by and used with permission of the author. All other rights reserved by the author.

Early Testing

https://www.softwaretestinghelp.com/early-testing/

 

This week I read the post from the site linked above. It talks about an effective quality assurance strategy for the early stages of a project. Uncaught bugs or fundamental flaws in the foundations of the software life cycle can result in setbacks for the later stages. Ultimately, it is more efficient to test early and test often than to spend resources going back to fix an early issue. When an early flaw does slip through, sometimes the only solution is to rewrite many already-built features just to change the foundation, which is the worst-case scenario in a time-crunch environment.

There is a balance to be achieved between spending enough time testing early so as not to bake in fundamental mistakes, and spending too much time testing without devoting enough resources to producing a working prototype. Software projects are typically planned on a time-based release schedule, quarterly, half-yearly, or yearly, depending on the size of the project and the goals in mind. Because time is of the essence, defects need to be triaged based on severity, time allocation, and expected collateral impact on the rest of the code.

This cycle can be broken down into steps: the developer creates, the tester tests the creation and ranks the severity of what they find, the developer responds by fixing the most important issues, and the tester evaluates the fixes. Ultimately, this cycle never ends for as long as both are employed. There are constantly new features and new bugs, and since it is impossible to discover every bug in every new feature, the product is never defect-free. In this way, testing is especially important in the early stages of development, since it pays off over a long term that we expect to never end.

Now that I've summarized why and how early testing occurs, the final segment of the site reviews the primary targets for early testing. To begin, stakeholders determine which features will be the most valuable by generating the most revenue, complying with standards, catching up to a competitor, or surpassing a competitor. These selected new features are the focus of early testing, since they usually involve many lines of code with a high possibility of intersection. QA and development leaders need to work together on these high-priority features in the early stages of development, and this collaboration must be stressed as especially important to the work efficiency of the entire project.

From the blog CS@Worcester – CS Mikes Way by CSmikesway and used with permission of the author. All other rights reserved by the author.

Software Testing Principles

I am writing in response to the blog post at https://www.guru99.com/software-testing-seven-principles.html titled “7 Software Testing Principles: Learn with Examples”.

This blog post highlights some useful general guidelines for software testing. To write effective test cases, it is important to follow a logical approach toward determining what to test for and how to test it, and this guide describes a set of principles well suited to capturing the logic involved in software testing.

The first principle is that exhaustive testing is not possible. I am not sure I believe this. In general it is useful to assume there is no perfect test, but for simple enough applications, where the number of possible interactions is enumerable, I would think it possible to achieve exhaustive testing, much like an exhaustive proof, where every possible path is covered and verified. Maybe I am missing the implausible event where the test itself is correct but the computer running the tests is corrupted in such a way that certain tests are not run. This is relevant to a point made in this area, which is risk assessment.
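As a tiny illustration of what exhaustive testing could look like when the domain is enumerable, here is a sketch in Java. The leap-year method and the 1-9999 range are my own stand-ins, not from the article; the point is that every possible input in the domain is checked against a trusted reference, like an exhaustive proof by cases:

```java
import java.time.Year;

public class ExhaustiveLeapYearTest {

    // The method under test: a hand-rolled leap-year check.
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    public static void main(String[] args) {
        // The domain is small enough to enumerate completely, so
        // every possible input is verified -- no sampling involved.
        for (int year = 1; year <= 9999; year++) {
            boolean expected = Year.isLeap(year); // standard-library reference
            if (isLeapYear(year) != expected) {
                throw new AssertionError("Mismatch at year " + year);
            }
        }
        System.out.println("All 9999 possible inputs verified.");
    }
}
```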

The second and third principles make similar points. Always running the same tests will eventually stop catching certain issues. If the same testing methods are always applied in exactly the same way, then eventually there will be some scenario the particular method is not suited for, and it will miss something. This leads into the later principles: the absence of a failure is not proof of success, and context is important. Developing tests suited to the particular application is necessary to ensure the correlation between tests passing and the program functioning correctly; just because every test passed does not mean the program is going to work perfectly.

This set of software testing principles can be summarized in a few basic points. Develop test cases that are well suited specifically for the application that is being tested, consider the risk of certain operations causing a failure, and do not assume that everything works perfectly just because every test case passed.

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Mocking & Mockito

For the final blog entry for this class, I decided to cover a topic that I have personal experience with but wanted to revisit for deeper understanding: mocking, and specifically Mockito. I found an article on toptal.com by Ivan Pavlov about Mockito that is a guide for everyday use of the tool, exactly what I was looking for! The article includes sections with analysis deep enough that they could serve as documentation for several aspects of Mockito. It also offers some insight into what mocks and spies are in the context of testing, unit testing specifically, as well as some examples of the Mockito tool in action.

For unit tests, we are testing the smallest unit of code possible for validity. The article states that the real dependencies are not actually needed, because we can create empty implementations of an interface for specific scenarios, known as stubs. A stub is another kind of test double, alongside mocks and spies, which are the two test doubles relevant to Mockito. According to the article, "mocking for unit testing is when you create an object that implements the behavior of a real subsystem in controlled ways." Mocks basically serve as a replacement for a dependency in a unit test. Mockito is used to create mocks, and you can tell Mockito what you want to happen when a method is called on a mocked object.
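Here is a minimal sketch of that idea. The mock(), when(), and thenReturn() calls are Mockito's real API; the PriceService interface is a hypothetical dependency of my own invention, with no real implementation anywhere:

```java
import static org.mockito.Mockito.*;

public class MockExample {

    // A hypothetical dependency -- no real implementation exists.
    interface PriceService {
        double priceOf(String item);
    }

    public static void main(String[] args) {
        // Create an empty stand-in for the interface.
        PriceService prices = mock(PriceService.class);

        // Tell Mockito what to do when the method is called.
        when(prices.priceOf("apple")).thenReturn(0.50);

        // The unit under test can now use the dependency in a
        // controlled way, without the real subsystem.
        System.out.println(prices.priceOf("apple"));  // 0.5
        System.out.println(prices.priceOf("banana")); // 0.0 (unstubbed default)
    }
}
```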

A spy is another test double used by Mockito. A spy, unlike a mock, requires a non-mocked instance to "spy on". The spy "delegates all method calls to the real object and records what method was called and with what parameters". It works just as you would expect a spy to: it spies on the instance. In general, mocks are more useful than spies, since they require no real implementation of the dependency the way a spy does. Needing a spy is also an indicator that a class is doing too much, "thus violating the single responsibility principle" of clean code.
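Again, a small sketch: the spy() and verify() calls are Mockito's actual API, while the list being spied on is just a convenient real instance standing in for whatever object a test would wrap:

```java
import static org.mockito.Mockito.*;

import java.util.ArrayList;
import java.util.List;

public class SpyExample {
    public static void main(String[] args) {
        // A spy wraps a real, non-mocked instance.
        List<String> spyList = spy(new ArrayList<>());

        // Calls are delegated to the real object...
        spyList.add("one");
        System.out.println(spyList.size()); // 1 -- the real list changed

        // ...and Mockito records which methods were called,
        // and with what parameters.
        verify(spyList).add("one");
    }
}
```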

Mocking with Mockito can be used when you want to test that an object returns a specific value when a method is called on it and certain parameters are met. A mock does not require the implementation of the methods to be written yet, since you can assign return values for a method call through the power of Mockito. This is why Mockito is such a popular testing tool, why mocking is such a popular testing strategy, and why I'll continue to use it whenever relevant.

Link to original article: https://www.toptal.com/java/a-guide-to-everyday-mockito

 

From the blog CS@Worcester – The Road to Software Engineering by Stephen Burke and used with permission of the author. All other rights reserved by the author.

Boundary Value Testing

For this week's blog post, I wanted to cover a topic we covered earlier this semester: boundary value testing. I found an article on what has seemingly become my go-to site for this blog, guru99. The article has several sections, including a brief definition of what boundary testing is, a definition of equivalence class partitioning, and the relationship between the two. There are also a couple of examples and an analysis section that explains why we should use equivalence and boundary testing.

The article describes boundary testing as "the process of testing between extreme ends of boundaries between partitions of the input values." These extreme values include the lower and upper bounds, or the minimum and maximum acceptable values for a given variable. Boundary testing needs more than just the minimum and maximum values, though: for the actual testing, you also need a value just above the minimum, a nominal value somewhere in the middle of the range, and a value just below the maximum.

Equivalence class partitioning is explained by the article as "a black box technique (code is not visible to tester) which can be applied to all levels of testing." It also states that when using equivalence class partitioning, "you divide the set of test condition into a partition that can be considered the same". This may sound a bit confusing, but it is actually pretty simple. The article uses the example of a ticket system where the values 1-10 are the only acceptable values and the values 11-99 are invalid. You could break this set of numbers into two equivalence class partitions and then, returning to our boundary testing concept, test values like 0, 1, 5, 9, and 10.
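Putting the two ideas together, here is a sketch of what those boundary cases might look like as a test. The isValidTicketCount method is a hypothetical stand-in for the article's ticket system, and I've added 11 to cover the value just above the maximum:

```java
public class TicketBoundaryTest {

    // Hypothetical method under test: 1 to 10 tickets are valid.
    static boolean isValidTicketCount(int n) {
        return n >= 1 && n <= 10;
    }

    public static void main(String[] args) {
        // Boundary values: just below the minimum, the minimum,
        // a nominal middle value, just below the maximum, the
        // maximum, and just above the maximum.
        int[] inputs =      {0,     1,    5,    9,    10,   11};
        boolean[] expected = {false, true, true, true, true, false};

        for (int i = 0; i < inputs.length; i++) {
            if (isValidTicketCount(inputs[i]) != expected[i]) {
                throw new AssertionError("Failed at input " + inputs[i]);
            }
        }
        System.out.println("All boundary cases pass.");
    }
}
```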

The best part about boundary testing is that because it is a form of black box testing, you do not need to see how the actual code works; you only need to know the specification for which values are valid and invalid. Another reason to use boundary testing is that it eliminates the need to test every value and thus minimizes the number of tests needed. I have used this testing strategy and will continue to use it throughout my software engineering career.

Link to original article: https://www.guru99.com/equivalence-partitioning-boundary-value-analysis.html

 

From the blog CS@Worcester – The Road to Software Engineering by Stephen Burke and used with permission of the author. All other rights reserved by the author.

Uh Oh! Spaghetti Code!

Hello, once again my fellow readers! I am sorry to report that this will be my last blog post for this semester and in turn this class. Continuing with my trend of Antipatterns today I will talk about a delicious Antipattern, Spaghetti Code.

Spaghetti code is the Antipattern that everyone first falls into when learning how to code in a new language, learning a new coding tool, or learning how to code in general. Spaghetti code is the Antipattern representing code that has very little software structure. As a result, the code lacks clarity and direction, even to the original developer. This is the classic moment where you uncover code from some time ago, sit down, look at it, and go, "What was I even trying to do here?"

It can be quite easy to identify Spaghetti Code. Simply look for methods that are very process-oriented. Object implementations will dictate the flow of execution. You will see minimal relationships between objects, and a very predictable pattern of object use.

Spaghetti Code can result in many consequences. It leaves a program with diminishing returns: if Spaghetti Code is mined, only part of the code, if any, will even be suitable for reuse. Likewise, maintaining the code becomes much more wasteful, with its own diminishing returns; it is often more practical and less wasteful to develop a new solution. Spaghetti code is so detrimental that it can even erase the benefits of object-oriented design.

To fix Spaghetti Code, you can refactor it. Refactoring is a natural and excellent way to maintain your code and a wonderful way to increase performance. The refactoring must first achieve a sufficient structure; then performance-critical code can be identified, and structural compromises implemented to enhance performance. Of course, the best way to get rid of Spaghetti Code is to prevent it in the first place.
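To show what that first structural pass might look like, here is a contrived before-and-after sketch (the order-handling code is invented for illustration). The behavior is unchanged, but the tangled block is split into named steps that can be read, tested, and reused on their own:

```java
public class OrderHandler {

    // Before: one process-oriented block where parsing, validation,
    // and pricing are all tangled together -- miniature spaghetti.
    static double handle(String order) {
        String[] p = order.split(",");
        int qty = Integer.parseInt(p[1]);
        if (qty <= 0) throw new IllegalArgumentException("bad quantity");
        return qty * (p[0].equals("apple") ? 0.50 : 1.00);
    }

    // After: the same behavior, refactored into named steps.
    static double handleRefactored(String order) {
        String[] parts = parse(order);
        int qty = validateQuantity(parts[1]);
        return qty * priceOf(parts[0]);
    }

    static String[] parse(String order) {
        return order.split(",");
    }

    static int validateQuantity(String raw) {
        int qty = Integer.parseInt(raw);
        if (qty <= 0) throw new IllegalArgumentException("bad quantity");
        return qty;
    }

    static double priceOf(String item) {
        return item.equals("apple") ? 0.50 : 1.00;
    }

    public static void main(String[] args) {
        System.out.println(handle("apple,3"));           // 1.5
        System.out.println(handleRefactored("apple,3")); // 1.5
    }
}
```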

As I was reading this, I knew it was something I could apply almost immediately. Next semester, I will have to work in a team and build off of an existing program. This Antipattern will stay with me, and I will definitely see how I can work refactoring into the process to ensure that no Spaghetti Code remains. I have a sneaking suspicion that refactoring, if implemented, will be able to squash many different Antipatterns that show up or are already lurking.

Well folks, this is going to be my last blog entry for this semester. Thanks for reading and learning along with me (or watching me read). Until next time, have a wonderful day!

 

From the blog CS@Worcester – Computer Science Discovery at WSU by mesitecsblog and used with permission of the author. All other rights reserved by the author.

Software Engineer Qualification

I am writing in response to the blog post at https://www.shiftedup.com/2015/05/07/five-programming-problems-every-software-engineer-should-be-able-to-solve-in-less-than-1-hour titled “Five programming problems every Software Engineer should be able to solve in less than 1 hour”.

This blog post shares a story about a long history of people who apply for the position of software engineer and claim some loosely related skills without actually having any chance of understanding or performing the job. The author expresses his frustration and lists five programming tasks meant to disqualify any supposed "software developer" who cannot complete them in under an hour. I attempted them myself and they only took five minutes.

I am not sure what motivation people have to apply for a job that they are in no way capable of performing, but the author of this blog post seems to be fed up with how common it is. Supposedly, though, people who do not know what programming is are attempting to become software engineers.

I think that the list of five programming problems and the time constraint of one hour is a generous filter to sort out all of the people who have never written a program in any language before. Passing certainly would not be enough to qualify someone for the position of software engineer, but that is not what the problems are meant to indicate upon fulfillment; it is simply what they are meant to reject upon failure. Somebody who claims to be a "developer" and fails to accomplish these simple tasks should revisit their resume.

The problems themselves are very basic: find the sum of some numbers using loops or recursion, combine the elements of two arrays, and calculate Fibonacci numbers; the last two problems are more peculiar but still simple demonstrations of basic problem solving. It should be evident in much less than an hour whether a person is capable of solving them, and any experienced software engineer should only need ten minutes.
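For a sense of the level involved, here is a sketch of the first problem as the post describes it, summing a list of numbers with a loop and with recursion (the method names are mine):

```java
import java.util.List;

public class SumProblem {

    // Iterative version: accumulate with a simple for-each loop.
    static int sumLoop(List<Integer> nums) {
        int total = 0;
        for (int n : nums) total += n;
        return total;
    }

    // Recursive version: an empty list sums to 0; otherwise the
    // sum is the head plus the sum of the tail.
    static int sumRecursive(List<Integer> nums) {
        if (nums.isEmpty()) return 0;
        return nums.get(0) + sumRecursive(nums.subList(1, nums.size()));
    }

    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3, 4, 5);
        System.out.println(sumLoop(nums));      // 15
        System.out.println(sumRecursive(nums)); // 15
    }
}
```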

The blog post acknowledges some feedback about the last two problems that are a bit less conventional than the others, but I think that the ability to solve unconventional problems is important, and I think anyone who writes code in something besides a markup language or an object notation could solve them.

From the blog cs-wsu – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Test Plan

In this blog, we are talking about the test plan. A test plan is one of the important testing deliverables offered during the release of a product. A software product, once developed and tested completely, is prepared for its release, during which various documents, reports, screenshots, etc. are also delivered to the client and other stakeholders of the project. Known as deliverables, these documents and reports are an integral part of the software development life cycle (SDLC), covered in my last blog, as they provide the necessary information about the product to the concerned individuals.

A test plan encompasses all the activities performed during the testing process. A test plan document offers all the necessary and relevant information to the developers, the business managers, as well as the customers. There are different types of test plans:

  • Level-specific test plans: These include the unit test plan, integration test plan, and system test plan.
  • Type-specific test plans: These include plans for major parameters, such as a performance testing plan.
  • Master test plan: This is one single big plan combining all the other plans to be carried out on the software product.

As for the test plan template, the testing team or test management team ensures that it follows a set template, which allows them to log all the necessary details about the testing process in the document. There is a fixed set of parameters, such as test items, testing approach, pass/fail criteria, and approvals, all defined by the IEEE 829 standard.

When creating a test plan, following a few guidelines helps the team record all the necessary information in the document with precision. Create a concise test plan with all the necessary information; the information provided should not be redundant or superfluous. While preparing a test plan, it is necessary for the team members to be specific and precise. Use points, lists, and tables wherever necessary to increase the readability of the document. Review the document constantly before it is released with the product, and update the test plan with all the recent changes and modifications.

We know that to make sure a test is of good quality, we need to know what we are looking for in testing. A testing plan helps us here: a well-written test plan ensures that all the aspects of the software are covered and tested. Combining this with a checklist would further ensure the accuracy of our tests.

From the blog CS@Worcester – Nhat's Blog by Nhat Truong Le and used with permission of the author. All other rights reserved by the author.

Crunch-a-Time me Captain (I just took a class on Design Patterns and Just Read That They are Wrong)

If that title scared you, I am sorry. This is the second post I am writing in a span of a couple hours, and I decided that this topic would prove to be a very interesting one to cover. While the post Rethinking Design Patterns by Jeff Atwood very openly claims that design patterns often cause unnecessary complexity and have turned much of today's programming workforce into a mindless army of Gang of Four missionaries, I believe the solution lies in a very delicate balance of both sides of the argument.

In the post, Atwood lays out the definitions (yes, definitions!) of "design pattern" in order to clear up any confusion about what he is referring to. There are two definitions because he is talking both about the patterns created by the Gang of Four to solve typical problems, and about the general idea of designing a template for a solution based on the circumstances one is faced with. He also writes about the book that directly inspired the Gang of Four bible: "A Pattern Language". That book outlines general ideas for solving problems rather than giving the reader templates for solving general problems. While the difference may be subtle, the latter approach leaves anyone implementing an idea with less understanding of the implementation and of whether the chosen pattern is the best way to solve the problem at hand.

I feel as though the teaching I have received about design patterns has greatly influenced my understanding of how to come up with solutions to problems. Before taking this class, almost every program I wrote was extremely complicated while solving relatively simple problems. Many of my programs were one-trick ponies that would need an almost total rewrite if functionality had to be added. After learning about design patterns, I have the proper tools to make my programs far more modular, which is a huge step in the right direction. I feel there are many cases where the Gang of Four design patterns are applicable and can be very efficient, but for problems that don't require a pattern, it is important that we as programmers recognize this and implement a simpler design to avoid useless complexity.

I will definitely be reading A Pattern Language soon, because it seems like many of the ideas laid out in that book are extremely helpful in situations where I am tasked with programming something that I can't think of a design for. Thank you for coming to my Ted Talk.

From the blog CS@Worcester – Dummies for Programming by John Pacheco and used with permission of the author. All other rights reserved by the author.

Hashmaps Blog 2

Today I read an article on baeldung.com about HashMaps. I have used HashMaps before in Data Structures as well as Unix Systems and have found them to be a very resourceful way to store and retrieve data. Most developers know about HashMaps but don't completely understand how they work, so in today's blog we will cover HashMaps in Java.

HashMap has an inner class called Entry, which holds the key and value pairs. Two important methods in the class are put() and get(). put() associates the specified value with the specified key in the map. It first checks whether the given key is null; if it is, the entry is stored in the zero position. The next internal part of the put method is fitting the values within the limits of the underlying array. The get() method is very similar to put, but instead of storing a value, it returns one. get() computes the hashcode of the key object and finds its location in the array. If the right value is discovered, it is returned; if it cannot be found, get() returns null. put() and get() are the two major internal parts of HashMap, and looking back at some of my old projects with HashMaps, I now fully understand what is going on internally when I execute the program. I will use HashMaps more frequently in my projects now, and I hope that after reading this you will understand HashMaps better and will give them a shot.
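For anyone who wants to try them out, here is a minimal usage sketch (the map contents are made up); put() and get() behave exactly as described above, including the null key being allowed:

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapDemo {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();

        // put() hashes each key to find its bucket and stores the entry.
        ages.put("Ada", 36);
        ages.put("Alan", 41);
        ages.put(null, 0); // a null key is allowed; it goes in position zero

        // get() hashes the same key to locate the stored value...
        System.out.println(ages.get("Ada"));   // 36
        // ...and returns null when the key is not present.
        System.out.println(ages.get("Grace")); // null
    }
}
```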

https://www.baeldung.com/java-hashmap

From the blog CS@Worcester – Dcanton Blog by dcantonblog and used with permission of the author. All other rights reserved by the author.