Category Archives: Week 12

Stress Testing and a Few Implementations

For this particular post, I was in the mood to cover something that we haven’t specifically covered in class: stress testing. I found a post on guru99.com that covers stress testing as a whole, including the need for stress testing and the different ways it can be implemented. I particularly enjoyed this post because it covers a broad range of topics within this area of testing while remaining easily understandable for someone who may have no familiarity with the concept whatsoever.

Stress testing is often associated with websites and mobile applications that may experience abnormal traffic surges, sometimes at predictable times and sometimes at completely unpredictable ones. Stress testing ensures that a system works properly under intense traffic and displays appropriate warning messages to alert the right people that the system is under stress. The post points out that the main end goal of stress testing is to ensure that the system recovers properly after failure.

A few different kinds of stress testing are application stress testing, systemic stress testing, and exploratory stress testing. Application stress testing is pretty self-explanatory: it looks for any bottlenecks in the application that may be vulnerable under stress. Systemic stress testing tests multiple systems running on the same server to find bottlenecks where one application blocks another’s data. Exploratory stress testing covers edge cases that are probably unlikely to ever occur but should still be tested for. The post gives examples such as a large number of users logging in at the same time, or a particularly large amount of data being inserted into the database at once.
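
To make the idea of application stress testing more concrete, here is a minimal sketch in Java using only the standard library. The endpoint URL, user count, and request count are all hypothetical, and in practice a dedicated load tool would usually be used instead, but the basic shape is the same: throw a lot of concurrent traffic at the system and count what breaks.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SimpleStressTest {

    public static void main(String[] args) throws InterruptedException {
        // Hypothetical endpoint and load numbers -- tune these for the system under test.
        String url = "http://localhost:8080/login";
        int concurrentUsers = 200;
        int requestsPerUser = 50;

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        AtomicInteger failures = new AtomicInteger();

        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        for (int user = 0; user < concurrentUsers; user++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                            .timeout(Duration.ofSeconds(5))
                            .GET()
                            .build();
                    try {
                        HttpResponse<Void> response =
                                client.send(request, HttpResponse.BodyHandlers.discarding());
                        if (response.statusCode() >= 500) {
                            failures.incrementAndGet(); // server buckled under load
                        }
                    } catch (Exception e) {
                        failures.incrementAndGet(); // timeout or connection refused
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        System.out.println("Failed requests: " + failures.get()
                + " out of " + (concurrentUsers * requestsPerUser));
    }
}
```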

I knew that this kind of testing had to exist because of my experience with certain applications. Years ago, when a new Call of Duty game came out, it was often unplayable for the first day or two because the network system was completely offline or simply unstable. Now, presumably, they have figured out their stress testing, since the service no longer goes offline on release and hasn’t for several years. Personally, I don’t know if I will be particularly involved with stress testing in my career, but the exposure to this area of testing cannot hurt. I do recommend that people take a look at this post on guru99.

Here is a link to the original post: https://www.guru99.com/stress-testing-tutorial.html


From the blog CS@Worcester – The Road to Software Engineering by Stephen Burke and used with permission of the author. All other rights reserved by the author.

3 biggest roadblocks to continuous testing

https://www.infoworld.com/article/3294197/application-testing/3-biggest-roadblocks-to-continuous-testing.html

Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain feedback on the business risks associated with a software release candidate as rapidly as possible. Many organizations have experimented with this kind of test automation, but many of them only achieve small amounts of success and the process is never expanded upon. That is due to three factors: time and resources, complexity, and results.

Teams notoriously underestimate the amount of time and resources required for continuous testing. Teams often create simple UI tests but do not plan ahead for all of the other issues that pop up, such as numerous false positives. It is important to keep individual tests synced with the broader test framework while also creating new tests for every new or modified requirement, all while trying to automate more advanced cases and keep them running consistently in the continuous testing environment.

Second, it is extremely difficult to automate certain tasks that require sophisticated setup. You need to make sure that your testing resources understand how to automate tests across different platforms, and you need secure and compliant test data in order to set up a realistic test as well as drive it through a complex series of steps.

The third problem is results. The most cited complaint with continuous testing is the overwhelming number of false positives that need to be reviewed and addressed. In addition, it can be difficult to interpret the results because they do not provide the risk-based insight needed to decide whether or not the tested product is ready to be released. The test results give you the number of successful and unsuccessful tests, from which you can calculate an accuracy, but there are numerous factors that contribute to those unsuccessful tests along with false positives; ultimately the results will not help you conclude which part is wrong and what needs to be fixed. Release decisions need to be made quickly, and unclear results make that more difficult.
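
The false-positive problem is easier to see with a small hypothetical example of my own (not from the article). The JUnit 5 test below models an automated check whose outcome depends on how loaded the test environment happens to be, so it fails intermittently even though nothing in the product is actually broken:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.Random;
import org.junit.jupiter.api.Test;

// Hypothetical example of a brittle automated check. The "page load" time varies
// from run to run (simulated here with a random delay), so the test sometimes
// fails on a slow build agent even though the feature itself works fine.
class CheckoutPageTest {

    @Test
    void pageLoadsWithinBudget() {
        long simulatedLoadMillis = 50 + new Random().nextInt(100); // 50-149 ms
        // A hard-coded performance budget turns normal environmental noise into
        // intermittent failures that someone then has to review by hand.
        assertTrue(simulatedLoadMillis < 100,
                "page took " + simulatedLoadMillis + " ms to load");
    }
}
```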

Looking at these reasons, I find continuous testing to be a poor choice when it comes to actually trying to test everything in a system. Continuous testing is more about speeding up the process for a company than about releasing a finished product. In a perfect world, the company would allow enough time for the team to test thoroughly, but when it is a race to release a product before your competition, continuous testing may be your only option.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

AntiPatterns: Lava Flow

If you’ve ever contributed to Open Source projects (or ever tried to write code much at all), the concept of Lava Flow probably isn’t very foreign to you.

Throughout the course of the development of a piece of software, teams take different approaches to achieve their end goal. Sometimes management changes, sometimes the scope or specification of the project is altered, or sometimes the team simply decides on a different approach. These changes leave behind stray branches of code that may not necessarily align with the intent of the final product. These are what we refer to as “Lava Flows”.

If the current development pathway is a bright, burning, flowing stream of lava, then old, stagnant, dead code hanging around in the program is a hardened, basalt-like run-off that is lingering in the final product. 

Lava Flow adds immense complexity to the program and makes it significantly more difficult to refactor — in fact, that difficulty to refactor is largely what causes lava flow in the first place. Programmers don’t want to touch code that they don’t fully understand because they run the risk of interfering with the functionality of the working product. In a way, the flows grow exponentially because developers create loose workarounds for pieces of code that may not even need to be implemented at all. Over time, the flow hardens into a part of the final product, even if it contributes nothing at all to the flowing development path.
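
SourceMaking describes this mostly in prose, but a small hypothetical Java sketch (class and method names invented for illustration) shows what hardened lava flow tends to look like in a codebase:

```java
// Hypothetical sketch of "lava flow": dead paths and vague workarounds that
// nobody dares to delete because nobody remembers why they exist.
public class ReportGenerator {

    // Left over from a reporting engine that was replaced years ago. No caller
    // remains, but it is kept "just in case" because deleting it once broke a build.
    @Deprecated
    public String generateLegacyReport(int[] rawData) {
        StringBuilder out = new StringBuilder();
        for (int value : rawData) {
            out.append(value).append(';');
        }
        return out.toString();
    }

    // The boolean flag is a workaround for a framework bug that no longer exists;
    // it is always false today, yet the extra branch still ships with the product.
    public String generateReport(int[] rawData, boolean useOldSeparator) {
        if (useOldSeparator) {
            return generateLegacyReport(rawData);
        }
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < rawData.length; i++) {
            if (i > 0) {
                out.append(',');
            }
            out.append(rawData[i]);
        }
        return out.toString();
    }
}
```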

So what can we do about this AntiPattern? The first and perhaps most important thing is to make sure that developers take the time to write easy-to-read, well-documented code that is actually relevant to the final project. It is far more important to take a preventative approach to Lava Flow, because refactoring massive chunks of code takes an exorbitant amount of time and money. When large amounts of code are removed, it’s important to understand why any bugs that pop up are happening. Looking for quick solutions without having a full grasp of the problems will just continue to accentuate the original problem.

I found out about this AntiPattern through SourceMaking.com’s blog post on it, which delves much deeper into the causes, problems, and solutions involved with it. I strongly recommend checking out that post as well, as it is far more elaborate than mine and gives great real-world examples along with it.

From the blog CS@Worcester – James Blash by jwblash and used with permission of the author. All other rights reserved by the author.

Data vs Information

I am writing in response to the blog post at https://www.guru99.com/difference-information-data.html titled “Difference between Information and Data”. This is a topic that was covered in my Database Design and Applications class, but it is still a useful distinction to be familiar with for the purposes of software testing.

The blog post first defines data and information individually. Data is defined as a raw, unorganized collection of numbers, symbols, images, etc. that is to be processed and has no inherent meaning in and of itself. Information, on the other hand, is defined as data that has been processed, organized, and given meaning. My general understanding of the difference is that data is information without context, and that seems to be consistent with how the blog post explains it.

There is a long table in the blog post contrasting different attributes of data and information: data has no specific purpose, it is in a raw format, and it is not directly useful and has no significance, whereas information is purposeful, dependent on data, organized, and significant. One particular property is labeled “knowledge level”, where data is described as “low level knowledge” and information is said to be “the second level of knowledge.” I have never considered “levels” of knowledge before; this seems to suggest that there are additional categories, and the post later mentions “knowledge” and “wisdom” as those categories. It then explains DIKW (Data, Information, Knowledge, Wisdom), which is something I had never heard of before: a model used for discussing these categories and the relationships between them. The example it provides lists a value for data as “100”, information as “100 miles”, knowledge as “100 miles is a far distance”, and wisdom as “It is difficult to walk 100 miles by any person, but vehicle transport is okay.” These additional levels seem to further process and contextualize the information. I think it would have been interesting if the post had expanded further on these topics, on how knowledge and wisdom are significant independently from data and information, and on whether further levels of knowledge exist even beyond that.
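
As a toy illustration of that DIKW example (my own sketch, not from the article), each level can be thought of as wrapping the one below it with more context:

```java
// Hypothetical sketch of the DIKW levels from the post: the raw value "100"
// only becomes useful as more context is layered on top of it.
public class DikwExample {
    public static void main(String[] args) {
        int data = 100;                                  // data: a bare number, no meaning
        String information = data + " miles";            // information: data plus units
        String knowledge = information + " is a far distance"; // knowledge: interpreted information
        String wisdom = "Walking " + information
                + " is difficult for anyone, but vehicle transport is fine."; // wisdom: actionable judgment
        System.out.println(data);
        System.out.println(information);
        System.out.println(knowledge);
        System.out.println(wisdom);
    }
}
```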

From the blog CS@Worcester – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

JavaScript vs TypeScript

I am writing in response to the blog post at https://www.guru99.com/typescript-vs-javascript.html titled “Typescript vs JavaScript: What’s the Difference?”. This is particularly relevant to our CS 343 Software Construction, Design and Architecture class because our final projects are written using TypeScript.

The blog post starts out by describing what JavaScript and TypeScript are. JavaScript is described as a scripting language meant for front-end web development and interactive web pages, and the post states that it is not meant for large applications, only for applications with a few hundred lines of code. I think the Google home page source code would like to disagree with that, with its thousands of lines of condensed, minified JS code running behind its seemingly plain surface, but given the speed of JavaScript relative to faster languages, it makes sense that it was never actually intended for large applications. The blog post moves on to explain what TypeScript is about: TypeScript is a JavaScript development language that is compiled to JavaScript code and provides optional typing and type safety.

A list of reasons to use JavaScript and reasons to use TypeScript is provided, but they are not in opposition; the reasons to use JavaScript are not reasons to avoid TypeScript, they are really just descriptions of JavaScript. JavaScript is a useful language, and TypeScript is a useful extension of JavaScript. After a history of the languages is given, a table of comparisons is provided. TypeScript has types and interfaces, while JavaScript does not. TypeScript supposedly has a steeper learning curve, but given that plain JavaScript syntax will work when writing TypeScript, the learning curve does not necessarily seem steeper, only longer, given the additional functionality that TypeScript offers. Similarly, TypeScript not having a community of developers as large as JavaScript’s seems insignificant, since a programmer writing TypeScript can gain just as much utility from the community of JavaScript developers as from the TypeScript community, given how similar the languages are. An interesting factoid at the end is that TypeScript developers have a higher average salary than JavaScript developers, by about a third.

From the blog cs-wsu – klapointe blog by klapointe2 and used with permission of the author. All other rights reserved by the author.

Facade Design Pattern

For this week’s blog on Software Architecture and Design, I will revisit the same assignment that I have blogged about before. For the assignment, I had the option between three design patterns to write a tutorial for. I picked the proxy design pattern, and then I blogged about the decorator design pattern. Now, I would like to watch a tutorial on the third design pattern, facade, so that I might learn about all three.
I chose to use the same YouTuber, Derek Banas, that I used before for the other blog. I found his videos so engaging and informative that I wanted to learn from him again. I also like that this one is fairly concise (11.5 minutes), which makes it much easier to rewatch sections that I don’t get the first time around.
It turns out that I did not understand the pattern after finishing Derek’s video, so I turned to another video from a different YouTube channel, run by Christopher Okhravi. Derek went straight into coding, whereas Christopher just drew diagrams and did not code. I needed more of an overview to understand it, not an example of code.
The thing that confused me about Derek’s example is that I did not see how it was in any way different from code that I have written in the past. In fact, he said, “You may have used this pattern already, but you may not have realized it.” When I watched his video, I did not know why it was so special.
Christopher’s video made it make sense. I used him for the original “proxy tutorial” assignment, and he was the one that made the proxy design pattern make sense. His videos tend to run on the longer side. At 16.5 minutes, this one wasn’t too long, but the proxy video was almost forty minutes.
Christopher’s diagrams were helpful to explain what makes the facade pattern what it is. I also now understand why it is called a “facade” pattern — one class acts as a “facade” to every other class and interface. The end user only interacts with the facade class, which calls what it needs from the other classes. The advantage of this is that everything stays loosely coupled.
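To connect the diagrams back to code, here is a minimal Java sketch of the facade pattern. The class and method names are invented for illustration and are not taken from either video:

```java
// Subsystem classes: the end user never has to touch these directly.
class Inventory {
    boolean isInStock(String item) { return true; } // stub for illustration
}

class PaymentProcessor {
    boolean charge(String customer, double amount) { return true; } // stub
}

class Shipping {
    void schedule(String item, String customer) {
        System.out.println("Shipping " + item + " to " + customer);
    }
}

// The facade: one class that exposes a simple entry point and coordinates the
// subsystem classes behind it, keeping the caller decoupled from all of them.
class OrderFacade {
    private final Inventory inventory = new Inventory();
    private final PaymentProcessor payments = new PaymentProcessor();
    private final Shipping shipping = new Shipping();

    public boolean placeOrder(String customer, String item, double price) {
        if (!inventory.isInStock(item)) {
            return false;
        }
        if (!payments.charge(customer, price)) {
            return false;
        }
        shipping.schedule(item, customer);
        return true;
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        // The caller only ever talks to the facade.
        boolean ok = new OrderFacade().placeOrder("Sam", "textbook", 59.99);
        System.out.println("Order placed: " + ok);
    }
}
```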
Although I think this concept is something I intuitively knew, it was helpful learning about this. Now I know there is a name for it.
Derek Banas: https://www.youtube.com/watch?v=B1Y8fcYrz5o
Christopher Okhravi: https://www.youtube.com/watch?v=K4FkHVO5iac

From the blog Sam Bryan by and used with permission of the author. All other rights reserved by the author.

A Review of Mockito

For this week’s blog on quality assurance, I wanted to review what we learned most recently in class. I decided to watch a relatively short (24 min) tutorial on Mockito. It has been over a week since I’ve seen it, and there’s another few days until we meet again. I could use the refresher before then.

The tutorial I chose to watch was by a YouTuber named Walter Schilling. I thought he explained the concept very well. I will definitely bookmark his page for other concepts that I find challenging.

I thought it was a little bit of a complicated setup. I don’t think it was necessary to see the UML diagrams or as extensive a walkthrough of how his code worked. He didn’t go excessively in depth, but I understood it pretty well after he gave a demonstration of the final code in action. (He typed in some inputs and showed what the output would be.) I didn’t need to know as much about his example code; that’s not what I had come to see.

When he got to the Mockito section, I was surprised at how little there seemed to be to it. I remember when we went over it in class, I didn’t think it was a very difficult concept, but I probably could not have done it without a little bit of review. After watching this tutorial, I have renewed confidence that I am able to do it again.
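
As a quick refresher on the kind of thing the tutorial demonstrates, here is a minimal Mockito/JUnit 5 sketch. The GradeRepository and GradeService names are hypothetical stand-ins, not the classes from Walter Schilling’s video:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

class GradeServiceTest {

    // Hypothetical collaborator that would normally hit a database.
    interface GradeRepository {
        List<Integer> gradesFor(String student);
    }

    // Hypothetical class under test: averages whatever the repository returns.
    static class GradeService {
        private final GradeRepository repository;

        GradeService(GradeRepository repository) {
            this.repository = repository;
        }

        double average(String student) {
            List<Integer> grades = repository.gradesFor(student);
            return grades.stream().mapToInt(Integer::intValue).average().orElse(0.0);
        }
    }

    @Test
    void averagesGradesFromTheRepository() {
        // Create a mock and script its behavior instead of touching a real database.
        GradeRepository repository = mock(GradeRepository.class);
        when(repository.gradesFor("alice")).thenReturn(List.of(80, 90, 100));

        GradeService service = new GradeService(repository);

        assertEquals(90.0, service.average("alice"), 0.001);
        verify(repository).gradesFor("alice"); // confirm the collaborator was actually used
    }
}
```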

I could see how someone might not like that his method wasn’t polished and rehearsed. He would say something such as, “Why is that giving me an error?” or “I don’t think I spelled [my variable name] correctly.” It didn’t take him long to diagnose any of these problems, and I kind of liked this style. It gave me more confidence in my own abilities when I could sometimes diagnose something as quickly as, if not more quickly than, he did on his own example. (To be fair, though, most of them were simple fixes.)

Towards the end of his video, all of his tests were failing, and he couldn’t figure out why for a moment. Something small that I gleaned from this is that no matter how good you get, no one is ever perfect. I have a habit of putting myself down for not knowing everything or making simple mistakes. I should not be quite so hard on myself. Even the experts make mistakes. You could go one step further and say that if they never made mistakes, they would never learn from them and become experts.

https://www.youtube.com/watch?v=8PgH0PwgEa8

From the blog Sam Bryan by and used with permission of the author. All other rights reserved by the author.

Apprenticeship Pattern: Draw Your Own Map

This week I read the apprenticeship pattern “Draw Your Own Map”, and it is one of the most inspiring patterns that I’ve read so far. This pattern instructs you to do exactly what its title states, and this is big for me because it gives me a sense of control. This pattern asks you to reflect on your current path and your current position and answer the question, “Is this really where I want to be in the future?” If the answer is not a yes, and there is no way to alter your current path to fit the path that you desire, then the author suggests that you leave, even if you are leaving behind a “great” title or a fantastic salary. Of course, what is being asked by this pattern is very intimidating and can be very difficult to actually do when the time comes. This is because from the moment we decide to go to school and take on a new career, the path is already laid out for us and society expects us to follow it: get your degree, get a job, stick with that job, and try to climb the corporate ladder there. That being said, there is nothing wrong with the climb if you are happy with where you are; the difficulty comes from taking the risk of abandoning what is sure and working toward what you want.

I could not agree more with this pattern, and reading it caused me to think about what some of my goals are. Though I enjoy learning about software and will be very happy to enter the field and work as an employee for a few years, one day I would like to start my own company. I’m sure that making the decision to leave a secure source of income behind and take the financial risk of starting a company will not be an easy one, but that is what this whole pattern is about. This pattern has changed the way that I think about the software industry and even about working as a whole. It has made me realize that, in the end, despite all of the technical challenges and setbacks that we will face in the professional world, what matters is doing what you love, and this is a mentality that I will take with me throughout my career.

From the blog CS@Worcester – Site Title by lphilippeau and used with permission of the author. All other rights reserved by the author.

Share What You Learn

I chose this pattern because it connects with my previous pattern, “Record What You Learn.” I also found some useful links to the “Expose Your Ignorance” pattern. It makes sense to me that after you record what you learn, you share it for other people to benefit from your learning; sharing with others will be easy because you already have it documented. You can share what you learn in many ways, such as blog posts, tutorials, and one-on-one or group discussions with people you think might be interested in the topic. One-on-one discussion is sometimes embarrassing if your target group already knows the subject, but it is a good place to start when you find people interested in knowing what you have learned. Sharing is not always easy for me because I always feel like someone has already dealt with what I learned and no one might need it. But I was proven wrong in one of my presentations, as what I shared was straightforward and people liked it much more than what they already knew about the subject. Also, in the midst of a team, I always try to be the first to respond if I have an idea of what is being asked, because I know how it helps me understand the scope of my knowledge better. This boils down to the point that sharing what you learn will not only make you valuable to the community but will also let you stay on top of your field of study.

Besides that, the challenging aspect of sharing what you learn is the ethical part of it. Sometimes it is difficult to rework what you learn into your own work, because it is straightforward knowledge and there is no way around presenting it as your own.

After reading this pattern, it has reinforced the power of recording what I learn, since that will make it easy for me to share what I learn with others. I also didn’t pay attention to the effects of what I shared in the past, and reading this has opened my eyes to look out for the legal, political, and financial implications of what I share in the future.

From the blog CS@Worcester – Computer Science Exploration by ioplay and used with permission of the author. All other rights reserved by the author.

Concrete Skills

This week I read the book Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman. There was one section that caught my attention: Concrete Skills. This was very interesting to me because of its quote, “Having knowledge is not the same as having the skill and practical ability to apply that knowledge to create software applications. This is where craftsmanship comes in.” Growing up, we are taught that you need all the knowledge you can get to even be considered for a job. Many people tell us that you need even a master’s degree to be considered for a job these days. However, there was a professor I met back when I first started programming who told me he used to be a hiring manager for IBM and had met hundreds of different candidates who applied for jobs. The thing he learned about hiring people is that not everyone is the same. Yes, there will be people who have the knowledge of programming and would pass all the tests and get everything right; however, once it came to actual coding and work ethic, they were very questionable. There are people who came out of college with a different perspective. They say that not everything you learn in school is going to be helpful in the real world; most of the time you learn your skills once you get a job, where they teach you what skills you need. It is really all about whether the company will take the risk and hire you. There are many stories of people with only a basic knowledge of programming being hired because they have experience in other fields that can be used to help out the team in different ways, because you don’t just want a team of programmers who know one language but a team that can cover each other’s weaknesses. That’s why I think this section is very good: it gives a real perspective on what companies should look for in a person, not just what they can do now, but how hiring them will affect the company and what they will be capable of in the future.

From the blog CS@Worcester – The Road of CS by Henry_Tang_blog and used with permission of the author. All other rights reserved by the author.