Category Archives: Week 12

Figuring out Continuous Integration

For this week, I have decided to read “Introduction to Continuous Integration Testing” from the TestLodge blog. I chose it because it is crucial to know the concepts of Continuous Integration and how this practice gives development teams flexibility. It helps in understanding the workflow and why the practice lets developers produce cohesive software even on short timelines.

This blog post goes over what Continuous Integration is, its advantages, and the concepts to understand before embarking on Continuous Integration. Continuous Integration is an approach within software development in which developers push code into a repository, such as Git, several times daily during the development phase. There are tools available, both licensed and open source, that developers can use to set up Continuous Integration, and with them simultaneous builds can be run on multiple platforms. Once a build is initiated, the solution can be compiled, unit and functional tests can be run, and the application can be deployed automatically to a server. The benefits include ensuring full integrity across code and build before deployment, along with simple setup and configuration. These tools consist mainly of a build server and build agents. To name a few parts of the automatic build process, there are the Build Server, Build Configuration, Build Source path, and Build step. Depending on the requirements and the size of the budget available to the development team, the choice between open source and licensed tools will vary.

What I find intriguing about this blog is that it goes out of its way to explain the parts of the automatic build. Usually with Continuous Integration, it is difficult to get the concepts down before setting up automated testing. I do understand that automation plays a critical part in the process, which is why it is appreciated when the concepts are laid out clearly for others. The content of this blog has changed my way of thinking about this practice.

Based on the content of this blog, I would say it is a great read for understanding the general ideas of Continuous Integration. I do not disagree with any of it, since it gives a good understanding of the goals of continuous testing. For future practice, I shall try to perform load tests on projects that have response-time requirements. That way, finding problem code or bugs can be much faster.


Link to the blog: https://blog.testlodge.com/continuous-integration-testing/

From the blog CS@Worcester – Onwards to becoming an expert developer by dtran365 and used with permission of the author. All other rights reserved by the author.

The Abstract Factory Design Pattern

This post on DZone talks about the abstract factory design pattern and gives an example implementation in Java using geometric shapes. This pattern is similar to the simple factory with the idea of constructing objects in factories instead of just doing so in a client class. It differs in that this abstract version allows you to have an abstract factory base that allows multiple implementations for more specific versions of the same original type of object. It also differs in that you actually create an instance of a factory object instead of just creating different objects within the factory class as in the simple factory.
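The article's implementation is in Java; a minimal TypeScript sketch of the same structure (the shape and factory names here are my own assumptions, not copied from the article) might look like:

```typescript
// Common product interface for every shape the factories can build.
interface Shape {
  name(): string;
}

class Circle implements Shape { name() { return "Circle"; } }
class Sphere implements Shape { name() { return "Sphere"; } }

// Abstract factory base: each concrete factory builds one family of shapes.
abstract class ShapeFactory {
  abstract getShape(kind: string): Shape;
}

class Shape2DFactory extends ShapeFactory {
  getShape(kind: string): Shape {
    if (kind === "circle") return new Circle();
    throw new Error(`Unknown 2D shape: ${kind}`);
  }
}

class Shape3DFactory extends ShapeFactory {
  getShape(kind: string): Shape {
    if (kind === "sphere") return new Sphere();
    throw new Error(`Unknown 3D shape: ${kind}`);
  }
}

// Provider: the client asks for a factory instead of constructing shapes itself.
function getFactory(dimension: "2d" | "3d"): ShapeFactory {
  return dimension === "2d" ? new Shape2DFactory() : new Shape3DFactory();
}
```

The client only ever touches `getFactory` and the `Shape` interface, which is the key difference from a simple factory: the factory itself is an object with interchangeable implementations.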

I like the concept of this pattern more than just having a simple class that creates multiple instances of different objects, as in the simple factory. I also like how the design allows you to have multiple types of objects that can split off into more specific types, such as how the example Java implementation has 2D and 3D shape types and factories for each kind. The design appears efficient, especially in the example implementation, creating a factory for a type of object only when it matches a specific type in the client call. As with the other factory pattern, you can also easily apply other design patterns to the object itself, such as a strategy or singleton, which would further improve the final outcome. Another aspect of this pattern that I like is that the client itself does not create the objects; it just calls the factory get method from a provider class that sits between the factory and the client.

I definitely like this pattern and will certainly consider using it the next time I have to create a program with many different variations of the same objects such as shapes or ducks as seen in previous programming examples. It will be especially useful to use this design if I am trying to type check the objects from user input to make sure they are trying to create a valid type of object with the factory. Overall, I am finding that as I read more articles about design patterns, especially for many objects of the same base, I am gaining a better understanding of how to maximize the program efficiency with one or multiple design patterns.

Source: https://dzone.com/articles/abstract-factory-design-pattern

From the blog CS@Worcester – Chris' Computer Science Blog by cradkowski and used with permission of the author. All other rights reserved by the author.

Figuring Out What They Expected

This week I read a post by Joel Spolsky, the CEO of Stack Overflow. The post talks about the terms “user model” and “program model”, and about making the program model conform to the user model. The user model is users’ mental understanding of what the program is doing for them. When new users start using a program, they do not come with a completely clean slate; they have expectations of how the program is going to work. If they’ve used similar software before, they will think it’s going to work like that other software. If they’ve used any software before, they will assume your software conforms to certain common conventions, and they may have intelligent guesses about how the UI is going to work. Similarly, the program model, a program’s own “mental model”, is encoded in bits and executed faithfully by the CPU. If the program model corresponds to the user model, you have a successful user interface.

For example, in Microsoft Word (and most word processors), when you put a picture in your document, the picture is actually embedded in the same file as the document itself. After inserting the picture, you can delete the original picture file and the picture will remain in the document. HTML, on the contrary, doesn’t let you do this: HTML documents must store their pictures in a separate file. If you take users who are used to word processors and don’t know anything about HTML, and sit them down in front of an HTML editor like FrontPage, they will almost certainly think that the picture is going to be stored in the file. This is the user model. Here, the program model (the picture must be in a separate file) does not conform to the user model (the picture will be embedded).

If you’re designing a program like FrontPage, you have to bring the program model in line with the user model. You have two choices: change the user model or change the program model. It is remarkably hard to change the user model. You could explain things in the manual, or pop up a little dialog box explaining that the image file won’t be embedded, but both are inefficient because they are annoying and users do not read them. So the best choice is almost always to change the program model, not the user model. Perhaps when users insert a picture, you could make a copy of the picture in a subdirectory beneath the document.

You can find the user model by describing a situation to some users and asking them what they think is happening; then you figure out what they expect. The most popular answer is the best user model, and it’s up to you to make the program model match it. You do not have to test many users or have a formal usability lab. In many cases, five or six users are enough, because after that you start seeing the same results again and again, and any additional users are just a waste of time. User models aren’t very complex: when people have to guess how a program is going to work, they tend to guess simple things rather than complicated things.

It’s hard enough to make the program model conform to the user model when the models are simple; when the models become complex, it gets even harder. So you should pick the simplest possible model.

Article: https://www.joelonsoftware.com/2000/04/11/figuring-out-what-they-expected/

From the blog CS@Worcester – ThanhTruong by ttruong9 and used with permission of the author. All other rights reserved by the author.

Path Testing in Software

Hello! Today’s topic of discussion is path testing in software development. The article up for discussion is “Path Testing: The Coverage” by Jeff Nyman, so let’s get right into it. What exactly is path testing? Path testing is a method of testing that involves traversing the paths through the code to ensure that the entire program gets test coverage. The point of path testing is to make graphs that represent your program. This is done by graphing out your program with nodes, which represent lines of code, methods, or bodies of code. The connections between the nodes represent the flow between them. If a program graph is made correctly, you should easily be able to identify the flow of the program, and structures such as loops should be distinctly represented. So what’s the big deal with testing using these program graphs? When we are testing, we want to make sure we test all of our code and the relationships between different parts of the code. If we use nodes for these different parts, we can test against the program graph by making sure that every node is traversed during testing, as well as every relationship (edge) between the nodes. This is an important concept for ensuring that your program gets 100% test coverage, and for making sure that the different parts of the code work together as they properly should.

I have drafted a few program graphs for programs in my testing class, and I have to say that they make the objective of the code entirely clear. By objective, I mean I can tell exactly what is supposed to happen in the code and the exact order of execution. Loops are entirely clear in these graphs because they are represented by a line going from one node looping back up to a node above it. If there are two nodes between the first node and the node that loops back up to it, then I know that nodes 1-4 form a loop of some sort: nodes 2 and 3 are some sort of method body that performs an action within the loop, while node 1 is the beginning of the loop and node 4 is the exit-condition node. You can test the edges between the nodes to ensure the relationships between the nodes are correct, and test the nodes themselves to ensure they work as desired. I think that program graph testing is a great way to visualize testing, and I hope to use it a lot in the future.
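The idea of covering every node and every edge can be sketched with a small graph structure. A minimal TypeScript sketch, with made-up node numbers matching the loop shape described above:

```typescript
// A program graph: nodes are statements/blocks, edges are control flow.
type Edge = [from: number, to: number];

class ProgramGraph {
  constructor(readonly nodes: number[], readonly edges: Edge[]) {}

  // Given a set of test paths (sequences of visited nodes), report whether
  // every node and every edge of the graph was exercised.
  coverage(paths: number[][]) {
    const visitedNodes = new Set<number>();
    const visitedEdges = new Set<string>();
    for (const path of paths) {
      path.forEach((n, i) => {
        visitedNodes.add(n);
        if (i > 0) visitedEdges.add(`${path[i - 1]}->${n}`);
      });
    }
    return {
      allNodes: this.nodes.every(n => visitedNodes.has(n)),
      allEdges: this.edges.every(([a, b]) => visitedEdges.has(`${a}->${b}`)),
    };
  }
}
```

For the loop above (nodes 1-4 with a back edge from 4 to 1, and node 5 as the exit), a single straight pass through the loop covers every node but misses the back edge, which is exactly the kind of gap this check exposes.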

Here’s the link: http://testerstories.com/2014/06/path-testing-the-coverage/


From the blog CS@Worcester – The Average CS Student by Nathan Posterro and used with permission of the author. All other rights reserved by the author.

Dynamic Test Process

Source: https://www.guru99.com/dynamic-testing.html

This week’s reading is a dynamic testing tutorial written by Radhika Renamala. Dynamic testing is described as a software testing technique where the dynamic behavior of the code is analyzed. An example provided is a simple login page which requires input from the end user for a username and password. When the user enters a username or password, there is an expected behavior based on the input. By comparing the actual behavior to the expected behavior, you are working with the system to find errors in the code. The article also lays out a dynamic testing process in the order of test case design and implementation, test environment setup, test execution, and bug reporting. The first step is simply identifying the features to be tested and deriving test cases and conditions for them. Then set up the test environment, execute the tests, and document the findings. Used this way, dynamic testing can reveal hidden bugs that can’t be found by static testing.
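The login example boils down to running the code with concrete inputs and comparing actual output against expected output. A minimal sketch (the login rule here is invented for illustration, not from the article):

```typescript
// A hypothetical login check: succeeds only for one known credential pair.
function login(username: string, password: string): "welcome" | "denied" {
  return username === "admin" && password === "secret" ? "welcome" : "denied";
}

// A dynamic test: execute the code with real inputs and compare the actual
// behavior to the expected behavior derived from the requirement.
function behaviorMatches(actual: string, expected: string): boolean {
  return actual === expected;
}
```

Each test case pairs an input with the behavior the requirement predicts; a mismatch between the two is exactly the kind of error dynamic testing is meant to surface.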

This reading was interesting because I thought the process was simpler than what is written in this article. Personally, I thought randomly inputting values and observing the output would be sufficient. However, the simplified steps aren’t as simple as they look, since there are necessary considerations. In the article, the author warns the reader that other factors should be considered before jumping into dynamic testing. Two of the most important are time and resources, as they will make or break the efficiency of running these tests. This is unlike static testing, which, as I have learned, is based around creating tests from the code provided by the developer. That lets testers easily create tests that are clearly related to the code, but it does not let them think outside the box, whereas dynamic testing does. This type of testing will certainly create abnormal situations that can surface a bug. As stated in the article, this type of testing is useful for increasing the quality of your product. After reading this article, I can see why the author concluded that using static and dynamic testing in conjunction with each other is a good way to deliver a quality product.

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

Stress Testing and a Few Implementations

For this particular post, I was in the mood to cover something that we haven’t specifically covered in the class, stress testing. I found a post on guru99.com that covers stress testing as a whole. The article covers aspects including the need for stress testing, as well as different types of implementations of stress testing. I particularly enjoyed this post because it covers a broad realm of topics within this particular area of testing, while being easily understandable for someone who may have no familiarity with the concept whatsoever.

Stress testing is often associated with websites, and mobile applications that may experience abnormal traffic surges during some predictable times and sometimes completely unpredictable times. Stress testing ensures that a system works properly under intense traffic, as well as displays possible warning messages to alert the appropriate people that the system is under stress. The post points out that the main end goal of stress testing is to ensure that the system recovers properly after failure.

A few different means of stress testing are application stress testing, systemic stress testing, and exploratory stress testing. Application stress testing is pretty self-explanatory: this test basically looks for any bottlenecks in the application that may be vulnerable under stress. Systemic stress testing tests multiple systems on the same server to find bottlenecks of data blocking between applications. Exploratory stress testing tests for possible edge cases that are probably unlikely to ever occur, but should still be tested for. The post gives examples such as a large number of users logging in at the same time, or a particularly large amount of data being inserted into the database at once.
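A toy version of application stress testing can be sketched as below: fire a burst of concurrent requests at one operation and measure the failure rate (the service and its concurrency limit are invented for illustration; real tools do this at much larger scale against live systems):

```typescript
// A hypothetical service that starts failing once too many requests are
// in flight at once, standing in for a real server under load.
class FlakyService {
  private inFlight = 0;

  async handle(): Promise<boolean> {
    this.inFlight++;
    try {
      await new Promise(resolve => setTimeout(resolve, 5)); // simulate work
      return this.inFlight <= 50; // degrades past 50 concurrent requests
    } finally {
      this.inFlight--;
    }
  }
}

// Stress test: launch a burst of concurrent calls and report the failure rate.
async function stressTest(users: number): Promise<number> {
  const service = new FlakyService();
  const results = await Promise.all(
    Array.from({ length: users }, () => service.handle())
  );
  return results.filter(ok => !ok).length / users;
}
```

Under a light load the failure rate stays at zero; once the burst size passes the service's capacity, failures appear, which is exactly the breaking point a stress test is trying to find.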

I knew that this kind of testing had to exist because of my experience with certain applications. Years ago, when a new Call of Duty game would come out, it was often unplayable for the first day or two, due to the network system being completely offline or simply unstable. Now, presumably, they have figured out their stress testing, as the service does not go offline on release and hasn’t for several years. Personally, I don’t know if I will be particularly involved with stress testing in my career, but the exposure to this area of testing cannot hurt. I do recommend that people take a look at this post on guru99.

Here is a link to the original post: https://www.guru99.com/stress-testing-tutorial.html


From the blog CS@Worcester – The Road to Software Engineering by Stephen Burke and used with permission of the author. All other rights reserved by the author.

3 biggest roadblocks to continuous testing

https://www.infoworld.com/article/3294197/application-testing/3-biggest-roadblocks-to-continuous-testing.html

Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. Many organizations have experimented with this kind of test automation, but many of them only achieve small successes and never expand the process, and that is due to three factors: time and resources, complexity, and results. Teams notoriously underestimate the amount of time and resources required for continuous testing. Teams often create simple UI tests but do not plan ahead for all of the other issues that pop up, such as numerous false positives. It is important to keep individual tests synced with the broader test framework while also creating new tests for every new or modified requirement, all while trying to automate more advanced cases and keep them running consistently in the continuous testing environment. Second, it is extremely difficult to automate certain tasks that require sophisticated setup. You need to make sure that your testing resources understand how to automate tests across different platforms, and you need secure, compliant test data in order to set up a realistic test and drive it through a complex series of steps. The third problem is results. The most cited complaint with continuous testing is the overwhelming number of false positives that need to be reviewed and addressed. In addition, it can be difficult to interpret the results because they do not provide the risk-based insight needed to decide whether the tested product is ready to be released.
The test results give you the number of successful and unsuccessful tests, from which you can calculate an accuracy, but there are numerous factors that contribute to those unsuccessful tests along with false positives; ultimately the tests will not help you conclude which part is wrong and what needs to be fixed. Release decisions need to be made quickly, and the unclear results make that more difficult.

Looking at these reasons, I find continuous testing to be a poor choice when it comes to actually trying to test everything in a system. Continuous testing is more about speeding up the process for a company than about releasing a finished product. In a perfect world, the company would allow enough time for the team to test thoroughly, but when it is a race to release a product before your competition, continuous testing may be your only option.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

AntiPatterns: Lava Flow

If you’ve ever contributed to Open Source projects (or ever tried to code pretty much ever), the concept of Lava Flow probably isn’t very foreign to you. 

Throughout the course of the development of a piece of software, teams take different approaches to achieve their end goal. Sometimes management changes, sometimes the scope or specification of the project is altered, or sometimes the team simply decides on a different approach. These changes leave behind stray branches of code that may not necessarily align with the intent of the final product. These are what we refer to as “Lava Flows”.

If the current development pathway is a bright, burning, flowing stream of lava, then old, stagnant, dead code hanging around in the program is a hardened, basalt-like run-off that is lingering in the final product. 

Lava Flow adds immense complexity to the program and makes it significantly more difficult to refactor — in fact, that difficulty to refactor is largely what causes lava flow in the first place. Programmers don’t want to touch code that they don’t fully understand, because they run the risk of interfering with the functionality of the working product. In a way, the flows grow exponentially because developers create loose workarounds to implement pieces of code that may not even need to be implemented at all. Over time, the flow hardens into a part of the final product, even if it contributes nothing at all to the flowing development path.
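In code, a lava flow often looks like an obsolete path nobody dares to delete. A contrived TypeScript sketch (my own example, not from the SourceMaking post):

```typescript
// Live code path: what the product actually uses today.
function totalPrice(prices: number[]): number {
  return prices.reduce((sum, p) => sum + p, 0);
}

// Hardened lava flow: an old promo calculation that no caller uses anymore,
// kept "just in case" because nobody is sure whether removing it is safe.
function totalPriceWithLegacyDiscount(prices: number[]): number {
  // TODO(2016): remove after the promo ends -- never removed.
  const total = prices.reduce((sum, p) => sum + p, 0);
  return total > 100 ? total - 10 : total;
}
```

The legacy function still compiles, still passes its old tests, and still confuses every new developer who has to figure out whether it matters, which is exactly the cost the AntiPattern describes.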

So what can we do about this AntiPattern? The first and perhaps most important thing is to make sure that developers take the time to write easy-to-read, well-documented code that is actually relevant to the final project. It is far more important to take a preventative approach to Lava Flow, because refactoring massive chunks of code takes an exorbitant amount of time and money. When large amounts of code are removed, it’s important to understand why any bugs that pop up are happening. Looking for quick solutions without having a full grasp of the problems will just continue to accentuate the original problem.

I found out about this AntiPattern through SourceMaking.com’s blog post on it, which delves much deeper into the causes, problems, and solutions involved with it. I strongly recommend checking out that post as well, as it is far more elaborate than mine and gives great real-world examples along with it.

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

Data vs Information

I am writing in response to the blog post at https://www.guru99.com/difference-information-data.html titled “Difference between Information and Data”. This is a topic that was covered in my Database Design and Applications class, but it is still a useful distinction to be familiar with for the purposes of software testing.

The blog post first defines data and information individually. Data is defined as a raw, unorganized collection of numbers, symbols, images, etc. that is to be processed and has no inherent meaning in and of itself. Information, on the other hand, is defined as data that has been processed, organized, and given meaning. My general understanding of the difference is that data is information without context, and that seems to be consistent with how the blog post explains it.

There is a long table on the blog post contrasting different attributes of data and information: data has no specific purpose, is in a raw format, and is not directly useful or significant, whereas information is purposeful, dependent on data, organized, and significant. One particular property is labeled “knowledge level”, where data is described as “low level knowledge” and information as “the second level of knowledge.” I have never considered “levels” of knowledge before; this seems to suggest that there are multiple additional categories, and the post later mentions “knowledge” and “wisdom” as those categories. DIKW (Data Information Knowledge Wisdom), which I had never heard of before, is explained as a model for discussing these categories and the relationships between them. An example it provides gives “100” as data, “100 miles” as information, “100 miles is a far distance” as knowledge, and “It is difficult to walk 100 miles by any person, but vehicle transport is okay” as wisdom. These additional levels further process and contextualize the information. I think it would have been interesting if the post had expanded further on how knowledge and wisdom are significant independently of data and information, and whether further levels of knowledge exist even beyond that.
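The first steps of the DIKW progression can be made concrete in a few lines of code. A small sketch following the blog's miles example (the "far distance" threshold is my own assumption):

```typescript
// Data: a raw value with no context.
const data = 100;

// Information: data given context (units, meaning).
function toInformation(value: number): string {
  return `${value} miles`;
}

// Knowledge: information interpreted against what we already know.
function toKnowledge(value: number): string {
  return value > 50
    ? `${value} miles is a far distance`
    : `${value} miles is a short distance`;
}
```

Each level wraps the previous one in more context, which is why information depends on data but not the other way around.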

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

JavaScript vs Typescript

I am writing in response to the blog post at https://www.guru99.com/typescript-vs-javascript.html titled “Typescript vs JavaScript: What’s the Difference?”. This is particularly relevant to our CS 343 Software Construction, Design and Architecture class because our final projects are written using Typescript.

The blog post starts out by describing what JavaScript and Typescript are. JavaScript is described as a scripting language meant for front-end web development of interactive web pages, and the post states that it is not meant for large applications – only for applications with a few hundred lines of code. I think the Google home page source code would disagree, with its thousands of lines of condensed, minified JS running behind its seemingly plain surface, but given the speed of JavaScript relative to faster languages, it makes sense that it was never actually intended for large applications. The blog post moves on to explain what Typescript is about: Typescript is a JavaScript development language that compiles to JavaScript code and provides optional typing and type safety.

A list of reasons to use JavaScript and reasons to use Typescript is provided, but they are not in opposition; the reasons to use JavaScript are not reasons to avoid Typescript, they are really just descriptions of JavaScript. JavaScript is a useful language, and Typescript is a useful extension of JavaScript. After a history of the languages is given, a table of comparisons is provided. Typescript has types and interfaces; JavaScript does not. Typescript supposedly has a steeper learning curve, but given that plain JavaScript syntax works when writing Typescript, the learning curve seems not so much steeper as longer, given the additional functionality Typescript offers. Similarly, Typescript not having a community of developers as large as JavaScript’s seems insignificant, since a programmer writing Typescript gains just as much utility from the JavaScript community as from the Typescript community, given how similar the languages are. An interesting factoid at the end is that Typescript developers have a higher average salary than JavaScript developers, by about a third.
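The types and interfaces in that comparison are the heart of the difference; a small sketch:

```typescript
// A TypeScript interface: plain JavaScript has no equivalent construct.
interface User {
  name: string;
  age: number;
}

// The annotated parameter lets the compiler reject calls that plain
// JavaScript would only fail on (or silently accept) at runtime.
function greet(user: User): string {
  return `Hello, ${user.name} (${user.age})`;
}

// greet({ name: "Ada" });  // compile-time error: property 'age' is missing
const message = greet({ name: "Ada", age: 36 });
```

Remove the interface and the `: User` annotation and this is valid JavaScript, which is exactly why the learning curve feels longer rather than steeper: the typing is added on top of syntax a JavaScript developer already knows.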

From the blog cs-wsu – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.