Author Archives: klapointe2

Software Testing Principles

I am writing in response to the blog post at https://www.guru99.com/software-testing-seven-principles.html titled “7 Software Testing Principles: Learn with Examples”.

This blog post highlights some useful general guidelines for software testing. To write effective test cases, it is important to follow a logical approach to determining what to test for and how to test it, and this guide describes a set of principles well suited to capturing the logic involved in software testing.

The first principle is that exhaustive testing is not possible. I am not sure I believe this. In general it is useful to assume there exists no perfect test, but for simple enough applications where the number of possible interactions is enumerable, I would think it would be possible to achieve exhaustive testing, much like an exhaustive proof, where every possible path is covered and verified. Maybe I am missing the implausible event that the tests are correct but the computer running them is corrupted in such a way that certain tests are not run. This ties into another point made in this area, which is risk assessment.
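
As a toy illustration of the kind of case I have in mind (a sketch of my own, not from the blog post), a function whose entire input space can be enumerated really can be tested exhaustively:

```typescript
// Hypothetical example: a function whose entire input space is four cases.
function xor(a: boolean, b: boolean): boolean {
  return a !== b;
}

// Exhaustive test: every possible input pair is checked against the expected output.
const cases: Array<[boolean, boolean, boolean]> = [
  [false, false, false],
  [false, true, true],
  [true, false, true],
  [true, true, false],
];

for (const [a, b, expected] of cases) {
  console.assert(xor(a, b) === expected, `xor(${a}, ${b}) should be ${expected}`);
}
```

Once the input space grows beyond a handful of values, this stops being practical, which is exactly the point the principle is making.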

The second and third principles make similar points. Always running the same tests will eventually fail to cover certain issues. If the same testing methods are always applied in exactly the same way, then eventually there will be some scenario the particular method is not suited for, and it will miss something. This leads into the later principles: the absence of a failure is not proof of success, and context is important. Developing tests suited to the particular application is necessary to ensure that passing tests actually correlate with the program functioning correctly; just because every test passed does not mean the program is going to work perfectly.

This set of software testing principles can be summarized in a few basic points. Develop test cases that are well suited specifically for the application that is being tested, consider the risk of certain operations causing a failure, and do not assume that everything works perfectly just because every test case passed.

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Software Engineer Qualification

I am writing in response to the blog post at https://www.shiftedup.com/2015/05/07/five-programming-problems-every-software-engineer-should-be-able-to-solve-in-less-than-1-hour titled “Five programming problems every Software Engineer should be able to solve in less than 1 hour”.

This blog post describes the author's experience with people who apply for the position of software engineer and claim loosely related skills without actually having any chance of understanding or performing the job. The author expresses frustration and lists five programming tasks meant to disqualify any supposed “software developer” who cannot complete them in under an hour. I attempted them myself and they only took five minutes.

I am not sure what motivation people have to apply for a job that they are in no way capable of performing, but the author of this blog post seems to be fed up with how common it is. Supposedly, though, people who do not know what programming is are attempting to become software engineers.

I think that the list of five programming problems and the time constraint of one hour is a generous filter to sort out all of the people who have never written a program in any language before. It certainly would not be enough to qualify someone for the position of a software engineer, but the problems are not meant to indicate qualification upon completion; they are simply meant to reject upon failure. Somebody who claims to be a “developer” and fails to accomplish these simple tasks should revisit their resume.

The problems themselves are very basic: find the sum of a list of numbers using loops or recursion, combine the elements of two arrays, and calculate Fibonacci numbers; the last two problems are more peculiar but still simple demonstrations of basic problem solving. It should be evident in much less than an hour whether a person is capable of solving them, and any experienced software engineer should only need ten minutes.
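
To give a sense of how little is being asked, here is a rough sketch of the first three problems in TypeScript (my own quick attempt, not the author's solutions):

```typescript
// Problem 1: sum of a list of numbers, once with a loop and once with recursion.
function sumLoop(nums: number[]): number {
  let total = 0;
  for (const n of nums) total += n;
  return total;
}

function sumRecursive(nums: number[]): number {
  return nums.length === 0 ? 0 : nums[0] + sumRecursive(nums.slice(1));
}

// Problem 2: combine two arrays by alternating their elements.
function interleave<T>(a: T[], b: T[]): T[] {
  const result: T[] = [];
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    if (i < a.length) result.push(a[i]);
    if (i < b.length) result.push(b[i]);
  }
  return result;
}

// Problem 3: the first n Fibonacci numbers.
function fibonacci(n: number): number[] {
  const fib: number[] = [];
  for (let i = 0; i < n; i++) {
    fib.push(i < 2 ? i : fib[i - 1] + fib[i - 2]);
  }
  return fib;
}

console.log(sumLoop([1, 2, 3]), sumRecursive([1, 2, 3])); // 6 6
console.log(interleave([1, 3, 5], [2, 4, 6]));            // [1, 2, 3, 4, 5, 6]
console.log(fibonacci(10));                                // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

These are unpolished versions written in a few minutes, which is the whole point of the exercise.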

The blog post acknowledges some feedback about the last two problems that are a bit less conventional than the others, but I think that the ability to solve unconventional problems is important, and I think anyone who writes code in something besides a markup language or an object notation could solve them.

From the blog cs-wsu – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Procedural and Object Oriented Programming

I am writing in response to the blog post at https://www.codementor.io/learn-programming/comparing-programming-paradigms-procedural-programming-vs-object-oriented-programming titled “Comparing Programming Paradigms: Procedural Programming vs Object-Oriented Programming”.

Object oriented programming seems to be the focus of nearly everything taught in a computer science course once the basics of syntax and control structures, which are the basis for procedural programming, are covered. The shift into object oriented programming seems to be mostly for the sake of establishing proper design principles such as encapsulation and normalization to reduce redundancy, but these features are not exclusive to the object oriented paradigm; it is still entirely feasible to write procedural code that is “good” code.

The blog post does not directly define what procedural programming is about, but it alludes to the writing of straightforward code that makes use of variables, scope, functions and control loops. Then comes the brief anecdote of writing thousand-line programs that become difficult to maintain, and how the object oriented programming paradigm is the solution. Object oriented design is definitely helpful for improving the scalability of a large program by introducing better organizational practices to the code structure, but the principles of encapsulation and modularity can be applied directly to a poorly maintained program without changing paradigms. This is not to say that object oriented programming is bad or unnecessary; the point is that procedural programming is not bad and does not need object oriented programming to “fix” it. Procedural code happens to be the most common poorly written code because it is most commonly used by beginners, who learn about better coding practices once introduced to object oriented programming.
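
To illustrate with a contrived sketch of my own (the account example is hypothetical, not from the blog post), the same encapsulation idea can be expressed procedurally or with an object; neither version is inherently well or poorly written:

```typescript
// Procedural style: the data and the functions that operate on it are separate,
// but by convention the balance is still only modified through these two functions.
interface Account { balance: number }

function deposit(account: Account, amount: number): void {
  if (amount > 0) account.balance += amount;
}

function withdraw(account: Account, amount: number): void {
  if (amount > 0 && amount <= account.balance) account.balance -= amount;
}

// Object oriented style: the same rules, with the state hidden inside a class.
class BankAccount {
  private balance = 0;

  deposit(amount: number): void {
    if (amount > 0) this.balance += amount;
  }

  withdraw(amount: number): void {
    if (amount > 0 && amount <= this.balance) this.balance -= amount;
  }

  getBalance(): number {
    return this.balance;
  }
}
```

In the procedural version the rules are a convention that callers are trusted to follow, while the class lets the compiler enforce them; the discipline itself, though, is available in either paradigm.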

Some of the faults of object oriented programming are described in the blog post, which adds that it is not a good idea to avoid procedural programming entirely for the sake of using object oriented programming exclusively. Modularity with respect to extending and modifying a class can make things difficult in languages that focus on object oriented programming, where overriding a method or re-implementing a class may have adverse effects on subclasses. The post ultimately concludes that multi-paradigm programming is a good choice, where the benefits of procedural and object oriented programming can be combined.

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Positive and Negative Testing

I am writing in response to the blog post at https://www.guru99.com/positive-vs-negative-testing.html titled “Positive Vs Negative testing”.

Positive and negative testing have some features in common with the details of boundary value testing that we covered in class. Robust boundary value testing, in particular, tests values not only on the boundaries but also inside and outside of the boundaries. Testing input values that are inside the boundaries is called positive testing, and testing input values that are outside of the boundaries is called negative testing.

The blog post confirms as much. Positive testing is about providing valid inputs and testing that the application behaves as expected. Negative testing is about providing invalid inputs and testing that the application does not do anything it is not expected to do. Boundary value analysis and equivalence partitioning are listed as techniques for positive and negative testing. Testing input data chosen within the boundary qualifies as positive testing, and testing input data chosen outside the boundary is negative testing. Equivalence class partitioning likewise has valid and invalid partitions depending on whether they lie inside the boundary; testing valid partitions is positive testing, and testing invalid partitions is negative testing.
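
As a small sketch of the idea (the validator is a hypothetical example of my own, not one from the blog post), suppose a field accepts ages from 18 to 65 inclusive:

```typescript
// Hypothetical function under test: accepts ages in the range 18 to 65 inclusive.
function isValidAge(age: number): boolean {
  return age >= 18 && age <= 65;
}

// Positive testing: valid inputs on and inside the boundaries should be accepted.
console.assert(isValidAge(18) === true,  "lower boundary should be valid");
console.assert(isValidAge(40) === true,  "nominal value should be valid");
console.assert(isValidAge(65) === true,  "upper boundary should be valid");

// Negative testing: invalid inputs just outside the boundaries should be rejected.
console.assert(isValidAge(17) === false, "just below lower boundary should be invalid");
console.assert(isValidAge(66) === false, "just above upper boundary should be invalid");
```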

“Positive testing” and “negative testing” are separate types of testing by themselves, but it does not necessarily make sense to use only one of them, because neither alone provides a complete analysis of the program’s behavior. Positive testing yields no conclusions about the behavior of the program given invalid inputs, and negative testing does not verify that the program behaves as it should when given valid inputs. Boundary value testing and equivalence class testing are specific methods that use positive and negative testing, and the process of testing inputs directly falls under black box testing, which does not distinguish between whether an input is categorized as valid or not, only whether the program behaves correctly. Doing both positive testing and negative testing provides enough information about the behavior of the program to be confident about whether it is going to behave correctly.

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Object Oriented Programming

I am writing in response to the blog post at https://blog.codinghorror.com/your-code-oop-or-poo/ titled “Your Code: OOP or POO?”

Most of the code I have ever written has been grounded in the imperative programming paradigm. I began pulling concepts from object oriented programming in the last few years to help organize large projects and keep their structure more adaptable. It is a useful way to develop a large framework, but relying on objects for absolutely everything seems likely to cause more problems than it was ever meant to solve.

This blog post refers to “POO” as “Programming fOr Others”, as opposed to plain Object Oriented Programming (OOP). One of the purposes of object oriented programming, and one of the reasons it is such a dogmatically adhered-to convention, is that it makes it a lot easier for people who did not write the code to understand what it is doing and work with it themselves. The blog post makes the point that object oriented programming can be overused, and that it is not the only way to write code that is readable and easy for others to understand.

There was an excerpt about a type of programmer who would write only five or ten lines of code preceded by twenty lines of comments, and how object oriented programming basically allows the two to be combined. Instead of concise code accompanied by an explanation, there is a large file with descriptive language embedded into the structure of the code itself.

Another excerpt mentioned the use of object oriented programming for trivial tasks. I do not think it makes any sense to go through the effort of supporting scalability and maintainability for something that can be started, completed, and discarded so easily without the extra work. A simple command-line script in an imperative language would be far more practical for basic tasks.

Part of the argument made is that a programmer should focus on the principles of object oriented programming, rather than the name. Encapsulation, simplicity, code re-use and maintainability are the main ideas, not just objects.

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Grey Box Testing

I am writing in response to the blog post at https://www.guru99.com/grey-box-testing.html titled “Grey Box Testing: Process, Techniques, Strategy, Challenges”.

Grey box testing, as the name suggests, is meant to be a combination of black box testing and white box testing. In white box testing the internal structure of the code is known, in black box testing it is unknown, and in grey box testing it is partially known. Grey box testing combines the benefits of black box testing and white box testing, bringing in input from developers while still testing from the perspective of a user.

An example of grey box testing is given where a tester is testing a website and something is wrong with the HTML code, so the tester is able to look at and make changes to the code themselves to continue testing. A few other types of testing are listed as qualifying as being in the category of grey box testing: matrix testing, regression testing, orthogonal array testing, and pattern testing. These methods follow the strategy of black box testing while integrating some white box testing practices to focus more on what the test cases should be looking for.
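
As a rough sketch of what that might look like (a hypothetical example of my own, not one from the blog post), the tester only calls the function the way a user would and asserts on its output, but uses partial knowledge of the internal branches to decide which inputs are worth trying:

```typescript
// Hypothetical function under test. A grey box tester has seen that there are
// three internal branches, even though callers only ever see the returned string.
function shippingCost(weightKg: number): string {
  if (weightKg <= 0) return "invalid";
  if (weightKg <= 5) return "standard";
  return "freight";
}

// Test cases chosen to exercise each known branch, but asserted purely on the
// observable output, the way a user of the function would see it.
console.assert(shippingCost(-1) === "invalid",  "non-positive weight branch");
console.assert(shippingCost(3)  === "standard", "small package branch");
console.assert(shippingCost(20) === "freight",  "heavy package branch");
```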

A sequence of steps is provided for how to perform grey box testing. The first two steps, identifying inputs and outputs, do not seem to be directly related to black box testing or white box testing alone. In black box testing, there would be no need to explicitly identify the inputs and outputs; they would already be given in the test specification. The creation of the specification does require access to the code, which relates to white box testing, so grey box testing in this sense seems like a real-time testing approach where there is not necessarily a pre-developed specification and learning about the environment is a part of the testing process.

The challenges listed for grey box testing include a test executing with an incorrect result, and something being tested encountering some failure. These do not really sound like challenges; they seem to just be examples of what testing is for.

From the blog cs-wsu – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Premature optimization

I am writing in response to the blog post at https://9clouds.com/blog/premature-optimization-the-root-of-all-evil/ titled “Premature Optimization: The Root of All Evil”.

Premature optimization is about focusing on making the code run as fast as possible before anything actually works yet. It is time consuming and, by definition, “premature”, so it is not a good thing to do. The blog post quotes Donald Knuth, who said “Premature optimization is the root of all evil.” For sizable projects, premature optimization is practically procrastination. It is about making sure that the program works well in theory before making sure that it works at all. The blog post seems to be referring to premature optimization on the scale of businesses, where I am only considering the effect on individual programs and projects, but the effects are the same. It makes more sense to finish something that works and then improve it than it does to improve it first and then finish it.

I have done a lot of premature optimization on small projects, and it certainly does take up a lot of time. I tend to do it just because it is interesting, though, to try to come up with an implementation that performs optimally. It is a secondary challenge that puts off the original task. If it becomes tedious or starts to seem wasteful then I just write something that works and move on to the next part, and if it matters or makes a difference, and it never does, I can look into how to make things faster again. The idea of actually producing something in a professional sense and becoming caught up in small-scale performance details before the rest of the product is finished definitely seems like a waste of time. Premature optimization requires predicting how things are going to work after they are made, and devising a plan to make them fit together smoothly before they even exist. It makes much more sense to make the things first, and then make them work together better afterwards.

From the blog cs-wsu – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Black/White box testing

I am writing in response to the blog post at https://www.guru99.com/back-box-vs-white-box-testing.html titled “Black Box Testing Vs. White Box Testing: Key Differences”.

Black box testing and white box testing are topics that we have covered in the CS 443 software quality assurance and testing class. Black box testing involves testing from the perspective of a user, who does not have access to the code, and thus the testing is done without referring to the code. Instead, the behavior is tested directly by checking inputs against expected outputs. White box testing involves code coverage and testing different branches and paths of code based on the code itself and not a pre-defined specification for the behavior to meet.
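
As a small sketch of the difference using an example of my own (not from the blog post), black box tests are derived purely from the specified input and output behavior, while a white box test is written by reading the code and covering branches the specification never mentions:

```typescript
// Hypothetical function under test: returns an absolute value, but with a
// branch that the written specification never mentions.
function absoluteValue(x: number): number {
  if (Number.isNaN(x)) return 0; // a developer decision not in the spec
  return x < 0 ? -x : x;
}

// Black box tests: derived only from the specification "return |x|".
console.assert(absoluteValue(5) === 5);
console.assert(absoluteValue(-5) === 5);
console.assert(absoluteValue(0) === 0);

// White box test: reading the code reveals the NaN branch, so a test is added
// specifically to cover it; a spec-only tester would never know it exists.
console.assert(absoluteValue(Number.NaN) === 0);
```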

The blog post provides a table of comparisons between black box testing and white box testing. Most of the points are that black box testing is done without access to the code and white box testing focuses on the code. One of the points made is that black box testing is not good for testing algorithms. This makes a lot of sense, and for certain algorithms, black box testing would be impossible to do. Black box testing requires some specification for the behavior of the program to meet, and that specification is supposed to cover everything that might go wrong with the program. This is fine when the program is made up of a set of conditional branches, but something more intricate like a computational geometry or machine learning algorithm would be impossible to test every case for because the number of different paths the code can take is on a completely different scale. White box testing in this case would be the only way to check that the algorithm will work, and that is by proving that it works. The only way black box testing could prove correct behavior would be with an exhaustive proof.

White box testing is not necessarily as easy or convenient as black box testing because it requires understanding and analyzing the code instead of just running it and seeing what happens. In some cases, though, it is necessary.

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Data vs Information

I am writing in response to the blog post at https://www.guru99.com/difference-information-data.html titled “Difference between Information and Data”. This is a topic that was covered in my Database Design and Applications class, but it is still a useful distinction to be familiar with for the purposes of software testing.

The blog post first defines data and information individually. Data is defined as a raw, unorganized collection of numbers, symbols, images, etc. that is to be processed and has no inherent meaning in and of itself. Information, on the other hand, is defined as data that has been processed, organized, and given meaning. My general understanding of the difference is that data is information without context, and that seems to be consistent with how the blog post explains it.

There is a long table on the blog post contrasting different attributes of data and information: data has no specific purpose, is in a raw format, and is not directly useful or significant, whereas information is purposeful, dependent on data, organized, and significant. One particular property is labeled “knowledge level”, where data is described as “low level knowledge” and information is said to be “the second level of knowledge.” I have never considered “levels” of knowledge before; this seems to suggest that there are multiple additional categories. The post later mentions “knowledge” and “wisdom” as additional categories and explains DIKW (Data Information Knowledge Wisdom), which is something I had never heard of before. It is a model used for discussing these categories and the relationships between them. An example in the post gives the data value “100”, the information “100 miles”, the knowledge “100 miles is a far distance”, and the wisdom “It is difficult to walk 100 miles by any person, but vehicle transport is okay.” These additional levels seem to further process and contextualize the information. I think it would have been interesting if the post had expanded further on these topics, on how knowledge and wisdom are significant independently from data and information, and on whether further levels of knowledge exist beyond that.

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

JavaScript vs Typescript

I am writing in response to the blog post at https://www.guru99.com/typescript-vs-javascript.html titled “Typescript vs JavaScript: What’s the Difference?”. This is particularly relevant to our CS 343 Software Construction, Design and Architecture class because our final projects are written using Typescript.

The blog post starts out by describing what JavaScript and Typescript are. JavaScript is described as a scripting language meant for front end web development and interactive web pages, and the post states that it is not meant for large applications, only for applications with a few hundred lines of code. I think the Google home page source code would like to disagree with that, with its thousands of lines of condensed, minified JS running behind its seemingly plain surface, but given the speed of JavaScript relative to faster languages, it makes sense that it was never actually intended for large applications. The blog post moves on to explain what Typescript is about. Typescript is a JavaScript development language that is compiled to JavaScript code and provides optional typing and type safety.
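
As a small illustration of that optional typing (a sketch of my own, not from the blog post), the same function can be written in a plain JavaScript style and then annotated; the annotated version turns a misuse into a compile-time error:

```typescript
// Plain JavaScript style: nothing stops a caller from passing the wrong thing.
// (With strict checks off, the untyped parameter is implicitly "any", as in JavaScript.)
function totalJs(prices) {
  return prices.reduce((sum, p) => sum + p, 0);
}

// Typescript style: the optional annotations catch the mistake at compile time.
function totalTs(prices: number[]): number {
  return prices.reduce((sum, p) => sum + p, 0);
}

totalTs([1.5, 2.5, 3.0]);    // fine
// totalTs("1.5, 2.5, 3.0"); // compile error: a string is not assignable to number[]
```

Because the annotations are optional, existing JavaScript can be adopted gradually, which fits how the post describes the relationship between the two languages.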

A list of reasons to use JavaScript and reasons to use Typescript is provided, but they are not in opposition; the reasons to use JavaScript are not reasons not to use Typescript, they are really just descriptions of JavaScript. JavaScript is a useful language, and Typescript is a useful extension of JavaScript. After a history of the languages is given, a table of comparisons is provided. Typescript has types and interfaces, and JavaScript does not. Typescript supposedly has a steeper learning curve, but given that plain JavaScript syntax works when writing Typescript, the learning curve does not seem steeper so much as longer, given the additional functionality Typescript offers. Similarly, Typescript not having a community of developers as large as JavaScript’s seems insignificant, because a programmer writing Typescript can gain just as much utility from the JavaScript community as from the Typescript community, given how similar the languages are. An interesting factoid at the end is that Typescript developers have a higher average salary than JavaScript developers, by about a third.

From the blog cs-wsu – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.