Stress Testing and a Few Implementations

For this particular post, I was in the mood to cover something that we haven't specifically covered in class: stress testing. I found a post on guru99.com that covers stress testing as a whole. The article covers aspects including the need for stress testing, as well as different ways of implementing it. I particularly enjoyed this post because it covers a broad range of topics within this area of testing while remaining easy to understand for someone who may have no familiarity with the concept whatsoever.

Stress testing is often associated with websites and mobile applications that may experience abnormal traffic surges, sometimes at predictable times and sometimes at completely unpredictable ones. Stress testing ensures that a system works properly under intense traffic and that it displays appropriate warning messages to alert the right people that the system is under stress. The post points out that the main end goal of stress testing is to ensure that the system recovers properly after failure.

A few different kinds of stress testing are application stress testing, systemic stress testing, and exploratory stress testing. Application stress testing is pretty self-explanatory: this test looks for any bottlenecks in the application that may be vulnerable under stress. Systemic stress testing tests multiple systems running on the same server to find bottlenecks where one application blocks another's data. Exploratory stress testing covers edge cases that are unlikely to ever occur but should still be tested for; the post gives the examples of a large number of users logging in at the same time and a particularly large amount of data being inserted into the database at once.
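To make the "large number of users logging in at once" example a little more concrete, here is a minimal sketch of what that kind of load could look like in Java. The LoginService interface is a made-up placeholder for whatever system is under test, and in practice a dedicated tool like JMeter would usually drive this, but the basic idea of firing many concurrent requests and counting failures is the same.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class LoginStressTest {

    // Hypothetical stand-in for the system under test.
    interface LoginService {
        boolean login(String user, String password);
    }

    public static void main(String[] args) throws InterruptedException {
        LoginService service = (user, password) -> true; // swap in a real client here

        int users = 10_000;                               // simulate 10,000 logins at once
        ExecutorService pool = Executors.newFixedThreadPool(200);
        CountDownLatch done = new CountDownLatch(users);
        AtomicInteger failures = new AtomicInteger();

        long start = System.nanoTime();
        for (int i = 0; i < users; i++) {
            final int id = i;
            pool.submit(() -> {
                try {
                    if (!service.login("user" + id, "secret")) {
                        failures.incrementAndGet();
                    }
                } catch (Exception e) {
                    failures.incrementAndGet();           // any exception counts as a failure
                } finally {
                    done.countDown();
                }
            });
        }

        done.await();
        pool.shutdown();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(users + " logins in " + elapsedMs + " ms, "
                + failures.get() + " failures");
    }
}
```

The interesting part is not the raw numbers but what happens at the edges: whether the system degrades gracefully, warns someone, and recovers afterward, which is exactly the end goal the post emphasizes.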

I knew that this kind of testing had to exist because of my experience with certain applications. Years ago, when a new Call of Duty game came out, it was often unplayable for the first day or two because the network system was completely offline or simply unstable. Presumably they have since figured out their stress testing, because the service no longer goes offline on release and hasn't for several years. Personally, I don't know whether I will be particularly involved with stress testing in my career, but the exposure to this area of testing cannot hurt. I do recommend that people take a look at this post on guru99.

Here is a link to the original post: https://www.guru99.com/stress-testing-tutorial.html

 

From the blog CS@Worcester – The Road to Software Engineering by Stephen Burke and used with permission of the author. All other rights reserved by the author.

Why proxy pattern?

You have probably heard or seen the term "proxy" multiple times in your browser or your OS and, just like me, not known what it means or what it does. In computer science, it is the name of a design pattern, and I think it is one of the most interesting design patterns. It is such a simple concept, yet effective, and it has been used by a lot of developers, especially in the field of networking.

For starters, according to the Gang of Four, the proxy pattern is categorized as a structural design pattern. Its purpose is to act as a simple wrapper for another object. In other words, the proxy object can be accessed directly by the user, and it can perform its own logic or the configuration changes required by the underlying subject object without giving the user direct access to that subject. This pattern offers advantages to both developers and users. It is commonly used to hide the internal structure of an object and only allow the user or client to access a certain part of it, i.e., access control. It can also be used to stand in for complex or heavy objects as a skeleton representation. A less popular use of the proxy pattern is to prevent redundant work (e.g., loading the same data from disk multiple times) every time the object is called.

The concept of a proxy is actually used a lot in the real world. Take the example of a debit card: it is one of the most common things that uses this pattern. Every time you swipe or check out with a debit card, you are basically telling the system to transfer money from your bank account to the vendor you are paying. The debit card has no value in itself; it just represents the amount of money in your bank account so that you don't have to carry that amount around.
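To connect the analogy back to code, here is a minimal sketch of the debit card as a proxy in Java. The class and method names are my own illustration, not from the post: the vendor only ever talks to the card (the proxy), which adds its own access control before delegating to the bank account (the real subject).

```java
// Subject: the common interface shared by the real object and its proxy.
interface Payment {
    boolean pay(double amount);
}

// Real subject: holds the actual money.
class BankAccount implements Payment {
    private double balance = 500.0;

    @Override
    public boolean pay(double amount) {
        if (amount > balance) {
            return false;               // not enough money in the account
        }
        balance -= amount;
        return true;
    }
}

// Proxy: what the vendor actually interacts with.
class DebitCard implements Payment {
    private final BankAccount account;  // reference to the underlying subject
    private final boolean authorized;

    DebitCard(BankAccount account, String enteredPin) {
        this.account = account;
        this.authorized = "1234".equals(enteredPin); // the proxy's own logic: access control
    }

    @Override
    public boolean pay(double amount) {
        if (!authorized) {
            return false;               // never reaches the real subject
        }
        return account.pay(amount);     // delegate to the real subject
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        Payment card = new DebitCard(new BankAccount(), "1234");
        System.out.println(card.pay(100.0));   // true: the proxy forwarded the request

        Payment stolen = new DebitCard(new BankAccount(), "0000");
        System.out.println(stolen.pay(100.0)); // false: blocked by the proxy
    }
}
```

The same shape covers the other uses mentioned above: a virtual proxy would create the heavy underlying object lazily, and a caching proxy would remember results instead of hitting the disk every time.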

The same idea applies in computer science. For instance, if you are working on a project where multiple servers have to run on separate subdomains, but unfortunately you only have one server machine, then a reverse proxy is definitely the way to go. In object-oriented programming, accessing an object through another object is also really common and useful for ensuring both security and convenience.

From the blog #Khoa'sCSBlog by and used with permission of the author. All other rights reserved by the author.

3 biggest roadblocks to continuous testing

https://www.infoworld.com/article/3294197/application-testing/3-biggest-roadblocks-to-continuous-testing.html

Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. Many organizations have experimented with this kind of test automation, but most of them only see small successes that never get expanded upon, and that is due to three factors: time and resources, complexity, and results.

Teams notoriously underestimate the amount of time and resources required for continuous testing. Teams often create simple UI tests but do not plan ahead for all of the other issues that pop up, such as numerous false positives. It is important to keep individual tests synced with the broader test framework while also creating new tests for every new or modified requirement, all while trying to automate more advanced cases and keep them running consistently in the continuous testing environment.

Second, it is extremely difficult to automate certain tasks that require sophisticated setup. You need to make sure that your testing resources understand how to automate tests across different platforms, and you need secure, compliant test data in order to set up a realistic test and drive it through a complex series of steps.

The third problem is results. The most cited complaint about continuous testing is the overwhelming number of false positives that need to be reviewed and addressed. In addition, it can be difficult to interpret the results because they do not provide the risk-based insight needed to decide whether or not the tested product is ready to be released. The test results give you the number of successful tests and the number of unsuccessful tests, from which you can calculate an accuracy, but there are numerous factors that contribute to those unsuccessful tests, along with false positives; ultimately the results alone will not help you conclude which part is wrong and what needs to be fixed. Release decisions need to be made quickly, and the unclear results make that more difficult.

Looking at these reasons, I find continuous testing to be a poor choice when it comes to actually trying to test everything in a system. Continuous testing is more about speeding up the process for a company than about releasing a finished product. In a perfect world, the company would allow enough time to let the team test thoroughly, but when it is a race to release a product before your competition, continuous testing may be your only option.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Python vs R

For this week’s blog I will be taking a look at more of the data analysis side of the computer science world with a comparison of R vs Python. When looking to get into data science, business intelligence, or predictive analytics, we often hear of the two programming languages, but we don’t necessarily know which one to learn or use in different situations.

The R language is a statistical and visualization language that was developed in 1992. R has a rich library ecosystem that makes it perfect for statistical analysis and analytical work. Python, on the other hand, is a general-purpose software development language (its standard implementation is written in C). Python can be used to deploy and implement machine learning at a large scale and can do similar tasks to R. In terms of usability, Python is better for data manipulation and repeated tasks, while R is better for ad-hoc analysis and general exploration of data sets. Python, being more of a general programming language, is the go-to for machine learning, while R is better at answering statistical problems.

R comes with many different abilities in terms of data visualization, which can be either static or interactive. R packages such as Plotly, Highcharter, and Dygraphs allow the user to interact with the data. Python has libraries such as scikit-learn, SciPy, NumPy, and Matplotlib. Matplotlib is the standard Python library used to create 2D plots and graphs, while NumPy is used for scientific computing.

Although R has long been the favorite of data scientists and analysts, Python has recently gained major popularity. Over the last few years, Python has risen in popularity by over 10 percent in total while the use of R has fallen about 5 percent. Since R is more difficult to learn than Python, the general consensus is that seasoned data scientists use R, while the entry-level new generation of data analysts prefers Python.

In the end, Python is clearly the better choice for machine learning because of its flexibility, especially if the data analysis tasks need to be integrated with web applications. If, on the other hand, you need rapid prototyping while exploring data sets or require statistical analysis of a data set, R can be much easier to use.

https://www.datasciencecentral.com/profiles/blogs/r-vs-python-meta-review-on-usability-popularity-pros-amp-cons

From the blog CS@Worcester – Jarrett's Computer Science Blog by stonecsblog and used with permission of the author. All other rights reserved by the author.

AntiPatterns: Lava Flow

If you've ever contributed to open source projects (or really ever tried to write code at all), the concept of Lava Flow probably isn't very foreign to you.

Throughout the course of the development of a piece of software, teams take different approaches to achieve their end goal. Sometimes management changes, sometimes the scope or specification of the project is altered, or sometimes the team simply decides on a different approach. These changes leave behind stray branches of code that may not necessarily align with the intent of the final product. These are what we refer to as “Lava Flows”.

If the current development pathway is a bright, burning, flowing stream of lava, then old, stagnant, dead code hanging around in the program is a hardened, basalt-like run-off that is lingering in the final product. 

Lava Flow adds immense complexity to the program and makes it significantly more difficult to refactor — in fact, that difficulty to refactor is largely what causes Lava Flow in the first place. Programmers don't want to touch code that they don't fully understand, because they run the risk of interfering with the functionality of the working product. In a way, the flows grow exponentially, because developers create loose workarounds to implement pieces of code that may not even need to be implemented at all. Over time, the flow hardens into a part of the final product, even if it contributes nothing at all to the flowing development path.

So what can we do about this AntiPattern? The first and perhaps most important thing is to make sure that developers take the time to write easy-to-read, well-documented code that is actually relevant to the final project. It is far more important to take a preventative approach to Lava Flow, because refactoring massive chunks of code takes an exorbitant amount of time and money. When large amounts of code are removed, it's important to understand why any bugs that pop up are happening. Looking for quick solutions without having a full grasp of the problems will just continue to accentuate the original problem.

I found out about this AntiPattern through SourceMaking.com's blog post on it, which delves much deeper into the causes, problems, and solutions involved with it. I strongly recommend checking out that post as well, as it is far more elaborate than mine and gives great real-world examples along with it.

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

SOLID, GRASP, and other basic principles of object-oriented design

Today's topic for blog 6 is SOLID, GRASP, and other basic principles of object-oriented design. The article I read teaches you the basics of the SOLID and GRASP principles. First, good code should have the following qualities: maintainability, extensibility, and modularity; these qualities show up in code that is easy to maintain, extend, and modularize over its lifetime. The article has many examples written in Java and C#. These are the principles that the article goes through:

Single responsibility principle, which states that a class should have only one responsibility and that a class fulfills its responsibilities by using its functions (this one, together with dependency inversion, is illustrated in the small sketch after this list).

Open-closed principle, which states that a software module (class or method) should be open for extension but closed for modification.

Liskov substitution principle, which states that derived classes must be substitutable for their base classes. What this basically means is that the abstraction should be enough for a client.

Interface segregation principle, which states that clients should not be forced to depend upon interfaces that they do not use.

Dependency inversion principle, which states: program to an interface, not to an implementation.

Hollywood principle, which helps prevent dependency rot. It states that high-level components dictate when and how low-level components are used, so that neither one depends directly on the other.

Polymorphism, which the article also counts as a design principle.

Information expert, which helps you assign responsibilities to classes.

Creator, which is a GRASP principle that helps decide which class should be responsible for instantiating another class.

Pure fabrication, which introduces a made-up class so that responsibilities that would otherwise reduce the cohesiveness of the domain classes can live somewhere else.

Controller, which is a design principle that helps minimize the dependency between GUI components and the domain model classes.

And finally, favoring composition over inheritance, and indirection.
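As promised above, here is a small sketch in the same spirit as the article's Java examples (the class names are my own made-up illustration, not taken from the article) showing single responsibility together with dependency inversion, i.e., programming to an interface.

```java
// Dependency inversion / program to an interface: high-level code depends on
// this abstraction rather than on any concrete implementation.
interface MessageSender {
    void send(String to, String body);
}

// Single responsibility: this class only knows how to deliver email.
class EmailSender implements MessageSender {
    @Override
    public void send(String to, String body) {
        System.out.println("Emailing " + to + ": " + body);
    }
}

// Single responsibility: this class only decides *when* to notify,
// not *how* the message travels.
class InvoiceNotifier {
    private final MessageSender sender;   // injected abstraction

    InvoiceNotifier(MessageSender sender) {
        this.sender = sender;
    }

    void notifyOverdue(String customer) {
        sender.send(customer, "Your invoice is overdue.");
    }
}

public class PrinciplesDemo {
    public static void main(String[] args) {
        // Swapping in an SMS sender or a test double needs no change to
        // InvoiceNotifier, which also echoes the open-closed principle.
        InvoiceNotifier notifier = new InvoiceNotifier(new EmailSender());
        notifier.notifyOverdue("customer@example.com");
    }
}
```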

All of these topics came with examples, and they were very easy to understand. I thought this article was very informative; it provides the basic main idea of each of these principles. After reading it, I changed the way I work because I now know how to apply these principles to improve my code. I agree with everything in this article, but if I were to improve it, I would break GRASP and SOLID up into separate articles. All in all, this article was very useful for learning the basics of these different principles and how to apply them.

https://dzone.com/articles/solid-grasp-and-other-basic-principles-of-object-o

From the blog CS@Worcester – Phan's CS by phancs and used with permission of the author. All other rights reserved by the author.

Data vs Information

I am writing in response to the blog post at https://www.guru99.com/difference-information-data.html titled “Difference between Information and Data”. This is a topic that was covered in my Database Design and Applications class, but it is still a useful distinction to be familiar with for the purposes of software testing.

The blog post first defines data and information individually. Data is defined as a raw, unorganized collection of numbers, symbols, images, etc. that is to be processed and has no inherent meaning in and of itself. Information, on the other hand, is defined as data that has been processed, organized, and given meaning. My general understanding of the difference between information and data is that data is information without context, and that seems to be consistent with how the blog post explains it.

There is a long table on the blog post contrasting different attributes of data and information: data has no specific purpose, is in a raw format, and is not directly useful or significant, whereas information is purposeful, dependent on data, organized, and significant. One particular property is labeled "knowledge level", where data is described as "low level knowledge" and information is said to be "the second level of knowledge." I have never considered "levels" of knowledge before; this seems to suggest that there are additional categories, and the post later mentions "knowledge" and "wisdom" as those categories. DIKW (Data, Information, Knowledge, Wisdom) is then explained, which is something I had never heard of before; it is a model used for discussing these categories and the relationships between them. The example it provides gives data as "100", information as "100 miles", knowledge as "100 miles is a far distance", and wisdom as "It is difficult to walk 100 miles by any person, but vehicle transport is okay." Each additional level further processes and contextualizes the information. I think it would have been interesting if the post had expanded further on these topics: how knowledge and wisdom are significant independently of data and information, and whether further levels of knowledge exist even beyond that.
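As a small illustration of my own (not from the post), the data/information split also shows up naturally in code: a bare number is just data, and wrapping it with context such as a unit turns it into information.

```java
// Information: a value plus the context that gives it meaning (records require Java 16+).
record Distance(double value, String unit) {
    @Override
    public String toString() {
        return value + " " + unit;
    }
}

public class DataVsInformation {
    public static void main(String[] args) {
        double data = 100;                                   // data: just "100"
        Distance information = new Distance(data, "miles");  // information: "100 miles"

        // A rough stand-in for the "knowledge" level: interpreting the information.
        boolean farToWalk = information.unit().equals("miles") && information.value() > 20;

        System.out.println(information + (farToWalk ? " is a far distance to walk" : ""));
    }
}
```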

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

JavaScript vs Typescript

I am writing in response to the blog post at https://www.guru99.com/typescript-vs-javascript.html titled “Typescript vs JavaScript: What’s the Difference?”. This is particularly relevant to our CS 343 Software Construction, Design and Architecture class because our final projects are written using Typescript.

The blog post starts out by describing what JavaScript and Typescript are. JavaScript is described as a scripting language meant for front-end web development and interactive web pages, and the post states that it is not meant for large applications – only for applications with a few hundred lines of code. I think the Google home page source code would like to disagree with that, with its thousands of lines of condensed, minified JS running behind its seemingly plain surface, but given the speed of JavaScript relative to faster languages, it makes sense that it was never actually intended for large applications. The blog post then explains what Typescript is about: Typescript is a JavaScript development language that is compiled to JavaScript code and provides optional typing and type safety.

A list of reasons to use JavaScript and reasons to use Typescript is provided, but they are not in opposition; the reasons to use JavaScript are not reasons to avoid Typescript, they are really just descriptions of JavaScript. JavaScript is a useful language, and Typescript is a useful extension of JavaScript. After a history of the languages is given, a table of comparisons is provided. Typescript has types and interfaces, and JavaScript does not. Typescript supposedly has a steeper learning curve, but given that plain JavaScript syntax will work when writing Typescript, the learning curve does not seem steeper so much as longer, given the additional functionality that Typescript offers. Similarly, Typescript not having a community of developers as large as JavaScript's seems insignificant, since a programmer writing in Typescript can gain just as much utility from the community of JavaScript developers as from the community of Typescript developers, the languages being so similar. An interesting factoid at the end is that Typescript developers have a higher average salary than JavaScript developers, by about a third.

From the blog cs-wsu – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Facade Design Pattern

For this week's blog on Software Architecture and Design, I will revisit the same assignment that I have blogged about before. For the assignment, I had a choice between three design patterns to write a tutorial for. I picked the proxy design pattern, and then I blogged about the decorator design pattern. Now I would like to watch a tutorial on the third design pattern, facade, so that I will have learned about all three.
I chose to use the same YouTuber, Derek Banas, that I used before for the other blog. I found his videos so engaging and informative that I wanted to learn from him again. I also like that the video is fairly concise (11.5 minutes), which makes it much easier to rewatch sections that I don't get the first time around.
It turns out that I still did not understand it after finishing Derek's video, so I turned to another video from a different YouTube channel, by Christopher Okhravi. Derek went straight into coding, whereas Christopher just drew diagrams and did not code. I needed more of an overview to understand it, not an example of code.
The thing that confused me about Derek’s example is that I did not see how it was in any way different from code that I have written in the past. In fact, he said, “You may have used this pattern already, but you may not have realized it.” When I watched his video, I did not know why it was so special.
Christopher's video made it make sense. I used him for the original "proxy tutorial" assignment, and he was the one who made the proxy design pattern make sense. His videos tend to run on the longer side; at 16.5 minutes, this one wasn't too long, but the proxy video was almost forty minutes.
Christopher's diagrams were helpful for explaining what makes the facade pattern what it is. I also now understand why it is called a "facade" pattern — one class acts as a "facade" over every other class and interface. The end user only interacts with the facade class, which calls what it needs from the other classes. The advantage of this is that everything stays loosely coupled.
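Based on that explanation, here is roughly what the shape looks like in Java. This is my own toy example, not the code from either video: the client only ever calls the facade, and the facade calls what it needs from the subsystem classes behind it.

```java
// Subsystem classes that the end user never touches directly.
class Inventory {
    boolean inStock(String item) { return true; }
}

class PaymentProcessor {
    boolean charge(String customer, double amount) { return true; }
}

class Shipping {
    void ship(String item, String customer) {
        System.out.println("Shipping " + item + " to " + customer);
    }
}

// The facade: one simple entry point that coordinates everything else.
class OrderFacade {
    private final Inventory inventory = new Inventory();
    private final PaymentProcessor payments = new PaymentProcessor();
    private final Shipping shipping = new Shipping();

    boolean placeOrder(String item, String customer, double price) {
        if (!inventory.inStock(item)) {
            return false;
        }
        if (!payments.charge(customer, price)) {
            return false;
        }
        shipping.ship(item, customer);
        return true;
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        // The client never sees Inventory, PaymentProcessor, or Shipping.
        boolean ok = new OrderFacade().placeOrder("keyboard", "sam", 49.99);
        System.out.println("Order placed: " + ok);
    }
}
```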
Although I think this concept is something I intuitively knew, it was helpful learning about this. Now I know there is a name for it.
Derek Banas: https://www.youtube.com/watch?v=B1Y8fcYrz5o
Christopher Okhravi: https://www.youtube.com/watch?v=K4FkHVO5iac

From the blog Sam Bryan by and used with permission of the author. All other rights reserved by the author.

A Review of Mockito

For this week's blog on quality assurance, I wanted to review what we learned most recently in class. I decided to watch a relatively short (24-minute) tutorial on Mockito. It has been over a week since I've seen it, and there are another few days until we meet again, so I could use the refresher before then.

The tutorial I chose to watch was by a YouTuber named Walter Schilling. I thought he explained the concept very well. I will definitely bookmark his page for other concepts that I find challenging.

I thought it was a little bit of a complicated setup. I don't think it was necessary to see the UML diagrams or such an extensive walkthrough of how his code worked. He didn't go excessively in depth, and I understood it pretty well after he gave a demonstration of the final code in action (he typed in some inputs and showed what the output would be), but I didn't need that much information on his example code. That's not what I had come to see.

When he got to the Mockito section, I was surprised at how little there seemed to be to it. I remember that when we went over it in class, I didn't think it was a very difficult concept, but I probably could not have done it again without a little bit of review. After watching this tutorial, I have renewed confidence that I am able to do it again.
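For anyone else who wants the refresher, here is about as small as a Mockito test gets. The OrderService and PriceRepository types are made-up placeholders, and this assumes JUnit 5 plus mockito-core are on the classpath; the mock(), when()/thenReturn(), and verify() calls are the standard Mockito API.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

public class OrderServiceTest {

    // Hypothetical collaborator we don't want to hit for real in a unit test.
    interface PriceRepository {
        double priceOf(String item);
    }

    // Hypothetical class under test.
    static class OrderService {
        private final PriceRepository prices;

        OrderService(PriceRepository prices) {
            this.prices = prices;
        }

        double total(String item, int quantity) {
            return prices.priceOf(item) * quantity;
        }
    }

    @Test
    public void totalUsesThePriceFromTheRepository() {
        // Create a mock and script how it should respond.
        PriceRepository repo = mock(PriceRepository.class);
        when(repo.priceOf("widget")).thenReturn(2.50);

        OrderService service = new OrderService(repo);

        assertEquals(5.00, service.total("widget", 2), 0.0001);
        verify(repo).priceOf("widget");  // confirm the collaborator was actually called
    }
}
```

The whole point is that OrderService can be tested without any real repository or database behind it.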

I could see how someone might not like that his method wasn't polished and rehearsed. He would say things such as, "Why is that giving me an error?" or "I don't think I spelled [my variable name] correctly." It didn't take him long to diagnose any of these problems, and I kind of liked this style. It gave me more confidence in my own abilities when I could sometimes diagnose something as quickly as, if not more quickly than, he did on his own example. (To be fair, though, most of them were simple fixes.)

Towards the end of his video, all of his tests were failing, and he couldn’t figure out why for a moment. Something small that I gleaned from this is that no matter how good you get, no one is ever perfect. I have a habit of putting myself down for not knowing everything or making simple mistakes. I should not be quite so hard on myself. Even the experts make mistakes. You could go one step further and say that if they never made mistakes, they would never learn from them and become experts.

https://www.youtube.com/watch?v=8PgH0PwgEa8

From the blog Sam Bryan by and used with permission of the author. All other rights reserved by the author.