Author Archives: dcafferky

Testing: Performance vs. Load vs. Stress

For this blog I chose an article called “Performance Testing vs. Load Testing vs. Stress Testing” on BlazeMeter’s website. BlazeMeter is a software testing platform for developers, but the company also keeps an active blog for its community. I thought this article sounded interesting because I get to compare three different types of testing rather than just learning one. I like the comparison aspect because it helps build awareness of why or when you might use one type of testing over another.

First mentioned is performance testing. This kind of testing takes into consideration things like responsiveness, stability, speed, and resource usage. What counts as satisfactory performance varies greatly between customers, so it’s important to establish the requirements early on. This type of testing can be used to check things like website or app performance and report back specific key performance indicators (KPIs).
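As a rough illustration of one common KPI, response time, here is a minimal Java sketch of timing a single request. This is my own example, not from the article; the URL and the 500 ms target are made-up values.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResponseTimeCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; replace with the page or API under test.
        URI target = URI.create("https://example.com/");
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(target).GET().build();

        long start = System.nanoTime();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // Report the KPI and compare it to an agreed-upon requirement.
        System.out.println("Status: " + response.statusCode()
                + ", response time: " + elapsedMs + " ms");
        if (elapsedMs > 500) {
            System.out.println("WARNING: exceeds the 500 ms target");
        }
    }
}
```

In a real performance test this measurement would be repeated many times and the results compared against the requirements agreed on with the customer.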

Next is load testing. This type of testing checks how a system behaves under heavy concurrent usage or volume. It’s a good indicator of how many users a system can handle. It is a good idea to check a variety of places in a system and not just the system as a whole. For example, a website may run a load test specifically on its checkout page. It’s suggested that load testing is something that should be done frequently.
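To make that checkout example concrete, here is a small sketch (my own, not from the article) that simulates a number of concurrent users with a thread pool. The URL and user count are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CheckoutLoadTest {
    public static void main(String[] args) throws Exception {
        int concurrentUsers = 50;                       // hypothetical load level
        URI checkout = URI.create("https://example.com/checkout");
        HttpClient client = HttpClient.newHttpClient();
        AtomicInteger failures = new AtomicInteger();

        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        for (int i = 0; i < concurrentUsers; i++) {
            pool.submit(() -> {
                try {
                    HttpRequest request = HttpRequest.newBuilder(checkout).GET().build();
                    HttpResponse<Void> response =
                            client.send(request, HttpResponse.BodyHandlers.discarding());
                    if (response.statusCode() >= 500) {
                        failures.incrementAndGet();     // server error under load
                    }
                } catch (Exception e) {
                    failures.incrementAndGet();         // timeout or connection error
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.MINUTES);
        System.out.println(failures.get() + " of " + concurrentUsers
                + " simulated users failed");
    }
}
```

Dedicated load-testing tools handle ramp-up, reporting, and distributed load far better than a hand-rolled loop, but the structure is the same: many simulated users, one target, and a count of failures.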

Last is stress testing. This type of testing checks the upper limits of a system by putting it under a heavy load, and it also monitors how the system returns to normal. Checking for security issues or memory leaks can also be a part of this testing. Two common methods of stress testing are spike and soak testing: a spike test applies a sudden, sharp increase in load, while a soak test builds the load up over a longer period. This type of testing is most popular for applications that can see high-volume events, like a website that sells concert tickets.
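The difference between the two methods is mostly in how the load is scheduled. A rough Java sketch of the two patterns, with made-up numbers and a placeholder for whatever a simulated user actually does:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StressPatterns {

    // Stand-in for one virtual user's work (e.g., a single ticket purchase request).
    static void simulatedUserAction() {
        // ... call the system under test here ...
    }

    // Spike: throw a large burst of users at the system all at once.
    static void spike(int users) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int i = 0; i < users; i++) {
            pool.submit(StressPatterns::simulatedUserAction);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }

    // Soak: grow the load step by step and hold it over a longer period.
    static void soak(int steps, int usersPerStep, long secondsBetweenSteps)
            throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        for (int step = 1; step <= steps; step++) {
            for (int i = 0; i < step * usersPerStep; i++) {
                pool.submit(StressPatterns::simulatedUserAction);
            }
            TimeUnit.SECONDS.sleep(secondsBetweenSteps);
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.MINUTES);
    }

    public static void main(String[] args) throws InterruptedException {
        spike(500);            // sudden jump in load
        soak(10, 50, 60);      // load builds up gradually over ten minutes
    }
}
```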

In conclusion, I can see the importance of all the mentioned types of testing. I would say that performance testing is your foundation and the most important. The other types of testing won’t matter if you don’t have an efficient application under normal circumstances. I would then say that load testing is more important than stress testing. While they seem very similar, I’d care more that my application can handle a heavy volume than about knowing what my system’s breaking point might be under a heavy spike. After finishing the article I am more interested now in how these types of testing are performed and what kind of tools developers use. This will probably be the topic of an upcoming blog.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Intro to Decorator Pattern

For this blog I chose an article on DZone, which is basically an online community for software development. I discovered the site through the UML exercise in class because we used one of their “refcards” as a guide. For the article, I went with “Decorator Pattern Tutorial with Java Examples” because I wanted to learn a design pattern I’d never heard of. Also, with Java examples it would be easiest for me to follow along.

To get started, it’s important to know that the decorator pattern allows class behavior to be extended dynamically at run-time. This means attributes can be added to objects dynamically, which the author relates to the concept of a wrapper object. The decorator pattern is more broadly considered a structural pattern, a category that encompasses many different patterns focused on identifying simple ways of implementing relationships between entities.

One of the principles of the decorator design is that classes should be open for extension but closed for modification. Two main situations in which the decorator pattern is useful are 1) when object behavior should be dynamically modifiable or 2) when concrete implementations should not be coupled to behavior. While subclassing can achieve the same effect, the decorator pattern can decrease maintenance by preventing an explosion of subclasses. The author suggests only applying the open/closed principle where code is most likely to change, to avoid making the code too abstract and complex.

A good example of the decorator pattern is a company’s email system. If an email is generated and sent out by a company employee, the decorator pattern can account for different scenarios. If the email is sent internally nothing changes, but if the email is sent externally the system can “decorate” the email with a copyright or confidentiality statement attached to it.
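A minimal sketch of that scenario in Java might look like the following. The class names and notice text are my own, made up for illustration, and are not from the article.

```java
// The component interface: anything that can produce email content.
interface Email {
    String getContent();
}

// A plain email with no decoration.
class BasicEmail implements Email {
    private final String body;

    BasicEmail(String body) {
        this.body = body;
    }

    public String getContent() {
        return body;
    }
}

// Base decorator: wraps another Email and delegates to it.
abstract class EmailDecorator implements Email {
    protected final Email wrapped;

    EmailDecorator(Email wrapped) {
        this.wrapped = wrapped;
    }
}

// Concrete decorator: appends a confidentiality notice for external recipients.
class ExternalEmailDecorator extends EmailDecorator {
    ExternalEmailDecorator(Email wrapped) {
        super(wrapped);
    }

    public String getContent() {
        return wrapped.getContent()
                + "\n\nThis message is confidential. (c) Example Corp.";
    }
}

public class EmailDemo {
    public static void main(String[] args) {
        Email internal = new BasicEmail("Meeting moved to 3pm.");
        // Decide at run-time whether to decorate, without modifying BasicEmail.
        Email external = new ExternalEmailDecorator(internal);

        System.out.println(internal.getContent());
        System.out.println(external.getContent());
    }
}
```

The internal email is left alone, while the external one is wrapped at run-time, which is the “open for extension, closed for modification” idea in action.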

After reading about the decorator pattern I can see the benefits, but I don’t think I have come across a project thus far where I would have really been able to put it to use. After a quick Google search it looks like Java’s I/O streams implementation is a good example for learning more about the decorator pattern. I will probably look into that, because on its own this article didn’t give me much to move forward with. I think after I play around with it I’ll be able to better understand how I could apply it to my own projects. This article was good for a general explanation but not for learning the actual implementation.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Intro To Incremental Testing

For this week’s blog I wanted to keep to the theme of learning a new type of software testing. I discovered softwaretestinghelp.com from our class’s Slack channel, as I know a few people recommended it. For my article I chose “What is Incremental Testing: Detailed Explanation With Examples” because I have not learned this type of testing and it was one of the more current articles posted.

While I could infer things about incremental testing based on its name, this article covered it in detail. To start, incremental testing is one approach to integration testing and combines modular testing with it as well. Basically, each module is tested on its own and then the modules are integrated one by one to make sure that each module that is added interacts how it should. This is where the term incremental comes from. Once each module has been integrated, the application or system is considered built. All modules should then be tested together to ensure smooth interaction and data flow. A couple of advantages of this type of testing are that defects can be found earlier and are easier to pinpoint based on which module was added last. This can also reduce the cost and time of rework compared to integrating and testing everything at once.
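As a rough sketch of the idea (my own, not from the article), imagine two made-up modules, a Cart and a PriceCalculator. Each gets its own test first, and an integration test is added once they are combined. The example assumes JUnit 5.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Two small hypothetical modules used only for illustration.
class PriceCalculator {
    private final double taxRate;
    PriceCalculator(double taxRate) { this.taxRate = taxRate; }
    double withTax(double amount) { return amount * (1 + taxRate); }
}

class Cart {
    private double subtotal = 0;
    private int items = 0;
    void add(double price) { subtotal += price; items++; }
    int itemCount() { return items; }
    double subtotal() { return subtotal; }
}

class IncrementalExampleTest {

    // Increment 1: unit-test the PriceCalculator module by itself.
    @Test
    void calculatorAppliesTax() {
        assertEquals(11.0, new PriceCalculator(0.10).withTax(10.0), 0.001);
    }

    // Increment 2: unit-test the Cart module by itself.
    @Test
    void cartTracksItemsAndSubtotal() {
        Cart cart = new Cart();
        cart.add(10.0);
        cart.add(2.0);
        assertEquals(2, cart.itemCount());
        assertEquals(12.0, cart.subtotal(), 0.001);
    }

    // Increment 3: integrate the two modules and test their interaction.
    @Test
    void cartTotalIncludesTax() {
        Cart cart = new Cart();
        cart.add(10.0);
        cart.add(2.0);
        double total = new PriceCalculator(0.10).withTax(cart.subtotal());
        assertEquals(13.2, total, 0.001);
    }
}
```

If the integration test fails while the two unit tests pass, the defect is almost certainly in how the modules interact, which is exactly the narrowing-down benefit the article describes.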

There are a few different methodologies with which incremental testing can be done. Three that are discussed are top-down, bottom-up, and sandwich testing. While these methodologies differ, they effectively use the same principles: they all rely on layered testing and a hierarchical design, and they integrate modules one by one.

After reading about incremental testing, it seems like a really good strategy to implement. I like the idea of testing a system piece by piece to avoid a possibly large amount of rework. It also adds confidence as you work through a project that everything is working together correctly. I think this allows developers to give a more accurate and reliable time estimate for a project’s completion as well. My reasoning is that if defects are found earlier in each increment, you can adjust and communicate timetables more effectively. The only downside I can see with incremental testing is repetitiveness. With each module that gets added, testing still needs to be done on the modules that were already added and tested to make sure the current system still works correctly. This testing is still useful; however, I can see how it may increase the amount of work to be done up front. In conclusion, I’d like to try incremental testing on an upcoming project to weigh the advantages and disadvantages firsthand.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Software Design Strategies

For this blog I chose an article on Tutorials Point called “Software Design Strategies”. I chose it because I have used tutorialspoint.com to look things up for projects in the past and I found their content very helpful. I chose this write-up in particular because I want to get a foundation in design strategies before I continue to dive into specific ones.

To begin, it’s important to understand what software design is in its most basic form. It is the process of taking user requirements and then planning and implementing them with the optimal solution. The design aspect is really HOW you choose to implement. There are a few major variants considered in software design: structured design, function-oriented design, and object-oriented design.

Structured design focuses on designing a solution in a way that gives a better understanding of how the problem can be solved. The primary problem is broken down into smaller problems which are solved with solution modules, and these modules are then arranged in a hierarchy to communicate and solve the larger problem. Key traits of an effective structured design are high cohesion and low coupling.

Function-oriented design views a system as a set of smaller sub-systems called functions. Every function should complete a task that is important to the system as a whole. Dividing the work into functions provides a means for abstraction, because the functions can hide data and the underlying operations as they communicate with each other. When using function-oriented design, you would usually use a data flow diagram to map out the main system as well as break it down at the function level.

Object-oriented design focuses on entities (called objects) and their characteristics. Objects are defined in classes, which lay out the attributes and methods giving each object its functionality. Other key traits that are important to object-oriented design include inheritance, polymorphism, and encapsulation. When using this strategy it is important to define a class hierarchy, which determines the relationships among the classes.
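For the object-oriented strategy in particular, a tiny hypothetical Java sketch of a class hierarchy showing inheritance, polymorphism, and encapsulation might look like this (my own example, not from the article):

```java
// Encapsulation: the balance is private and only reachable through methods.
abstract class Account {
    private double balance;

    Account(double openingBalance) {
        this.balance = openingBalance;
    }

    void deposit(double amount) {
        balance += amount;
    }

    double getBalance() {
        return balance;
    }

    // Each subclass decides how interest is applied (polymorphism).
    abstract double monthlyInterest();
}

// Inheritance: specific account types extend the common Account class.
class SavingsAccount extends Account {
    SavingsAccount(double openingBalance) { super(openingBalance); }

    @Override
    double monthlyInterest() {
        return getBalance() * 0.02 / 12;
    }
}

class CheckingAccount extends Account {
    CheckingAccount(double openingBalance) { super(openingBalance); }

    @Override
    double monthlyInterest() {
        return 0; // checking accounts earn no interest in this sketch
    }
}

public class BankDemo {
    public static void main(String[] args) {
        Account[] accounts = {
            new SavingsAccount(1200.0),
            new CheckingAccount(300.0)
        };
        // The same call behaves differently depending on the concrete class.
        for (Account account : accounts) {
            System.out.println("Interest this month: " + account.monthlyInterest());
        }
    }
}
```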

After reading this article I feel that I have a better understanding of the foundations of software design. The three strategies that were mentioned are important to understand because they give you options for how to meet the software’s requirements. It seemed like the three strategies were discussed in order from least complex to most complex. If I had a difficult problem or set of requirements to meet, object-oriented design would be my choice. If I had a problem that was mainly input and output, I would go with structured design as long as I did not need to hide the implementation. In conclusion, I think it’s important to make a conscious decision about which design method you are going to use from the beginning. Otherwise, when you start implementing requirements, you may find your design to be less than optimal. As we move through the semester I plan to try to identify the base level of design for the code that we will work with.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Intro To Interoperability Testing

As the Fall semester kicks off and I begin to dive into the curriculum of Software QA and Testing, I quickly realized how little I actually know and how in-depth testing really needs to be. That being said, I want to use my blog posts as an opportunity to learn about different types of testing. That led me to go with the article “A Simple Guide to Interoperability Testing” written by the team over at ThinkSys. I really had no idea what interoperability testing was before I read this, so I decided to learn. I also liked that it was written in 2017. I know a lot of software topics have remained the same for decades, but there’s something refreshing about reading something that has been published more recently.

First off, interoperability testing is a type of non-functional testing. This means it is testing the way a system operates and the readiness of the system. In the most general sense, interoperability is how well a system interacts with other systems and applications. A great example provided is a banking application system (seen below) where a user is transferring money. Data and information are exchanged on both sides of the transfer, and the transaction completes without any interruption in functionality.

[Figure: Banking Application Interoperability]

The testing of the functionality that allows this fluent interaction between systems is what interoperability testing really is. It ensures end-to-end functionality between systems based on protocols and standards. The article covers five steps to perform this type of testing (a small sketch tying them back to the banking example follows the list). They consist of:

  1. Planning/Strategy: Understand each application that will be interacting in the network.
  2. Mapping/Implementing: Associate appropriate test cases with each requirement; develop all test plans and test cases.
  3. Execution: Run all test cases, log and correct defects, and retest and regression test after patches have been applied.
  4. Evaluation: Determine what the test results mean and ensure complete coverage of the requirements.
  5. Review: Document and review the testing approach, outlining all test cases and practices so further testing can improve on what has been done.
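Returning to the banking example, here is a minimal sketch of what one end-to-end interoperability check might look like, assuming JUnit 5 and two made-up stub classes. In real interoperability testing the two sides would be separate, real systems running in their own environments; the stubs only illustrate the kind of data-exchange check being made.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Two hypothetical systems standing in for the sending and receiving banks.
class SendingBank {
    private double balance = 500.0;

    // Sends the agreed-upon message to the other system and returns its reply.
    String transfer(double amount, ReceivingBank other) {
        balance -= amount;
        return other.receive("TRANSFER:" + amount);
    }

    double balance() { return balance; }
}

class ReceivingBank {
    private double balance = 100.0;

    // Parses the agreed-upon message format and acknowledges it.
    String receive(String message) {
        double amount = Double.parseDouble(message.split(":")[1]);
        balance += amount;
        return "ACK:" + amount;
    }

    double balance() { return balance; }
}

class TransferInteroperabilityTest {

    // End-to-end check: the data exchanged between the two systems stays consistent.
    @Test
    void transferIsReflectedOnBothSides() {
        SendingBank sender = new SendingBank();
        ReceivingBank receiver = new ReceivingBank();

        String ack = sender.transfer(50.0, receiver);

        assertEquals("ACK:50.0", ack);
        assertEquals(450.0, sender.balance(), 0.001);
        assertEquals(150.0, receiver.balance(), 0.001);
    }
}
```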

Some of the challenges that can arise with interoperability testing include the wide range of interactions that can take place between systems, the difficulty of testing across multiple system environments, and root causes of defects being harder to track down when multiple systems are involved.

After reading this article I can definitely see the complexity in interoperability testing. Taking an iterative approach seems like it would be the best method because you can use your results each iteration to create better test cases and more coverage. Trying to tackle all the test cases in one execution would be overwhelming and it would be difficult to have close to full coverage. It seems like interoperability testing would need to take place anytime an application is updated as well to make sure the systems that interact with it are still compatible. Now that I have a general understanding of interoperability testing I am certain it will play a role in future jobs and work I do. With today’s technology, it is rare to have a complete standalone application that doesn’t interact with additional systems.

In conclusion I enjoyed this article because it was simple, to the point, and I was able to retain the information.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Summary of Agile Modeling

After being exposed to agile development last semester, more specifically Scrum, I thought it would be interesting to continue my learning in the topic. Seeing as I am now studying software design principles, agile modeling seems like a great place to start.

This is why I chose an article titled “An Introduction to Agile Modeling”, which gives a breakdown of what Agile Modeling (AM) really is and how to apply it to software development using best practices. Basically, AM is a collection of values, principles, and practices which can be applied to any development project in an effective and lightweight manner. It is meant to be applied on top of base processes like the Rational Unified Process or Extreme Programming and frameworks like Scrum. The idea is that AM can be tailored to any developer’s needs.

The values AM is based on:

Communication: All project stakeholders need to be able to openly converse throughout a project’s life cycle.

Simplicity: Always focus and aim for the simplest solution that meets all requirements and solves all problems.

Feedback: Get feedback often and early on the work that is done to course correct and create project transparency.

Courage: Make and stick to your decisions.

Humility: You won’t know everything, others can add value to the project.

The principles AM is based on:

The importance of assuming simplicity. The simplest solution is always best.

Embrace change when working because requirements WILL change.

Strive for rapid feedback to enable the agility of incremental changes.

Model with purpose; if you don’t know why you are modeling something, then you shouldn’t be doing it.

There’s more than one way to model and be correct. Content > Representation

Open communication is crucial for teamwork.

Always focus on the quality of work. The product owners should be proud and refactoring down the road should be easy.

After reading this article I think the above values and principles make a solid foundation when it comes to software modeling. While I sometimes think these things are obvious, it’s important to conduct yourself in accordance with a set of guidelines and a code of conduct. Most of the ideas behind Agile Modeling are similar to those in the books I’ve read by Robert Martin, aka Uncle Bob. I think the most important part of AM is the constant communication between all stakeholders. Requirements are always bound to shift and change priority, so feedback and iterative changes are crucial for AM to be effective. The only thing I did not agree with in the article was that content is more important than representation. I think you could have great content, but if you are not effective at representing it, then what’s the point? In my opinion they are equals.

In conclusion I plan to read more into Agile Modeling and how I can apply it to my own projects. Moving forward I am going to tailor my software modeling to follow the best practices of AM as I believe it will create a strong foundation of design principles.

[Figure: Agile Model Driven Development (AMDD)]

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Intro

Hi, this is my test post for CS-343 and CS-443.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

Sprint 6 Reflection

In Sprint 6, even though we were close to fixing the issue we had assigned ourselves, we chose not to take on additional tasks due to it being the final sprint. We continued to work on APTS-296 and were able to run a fixed instance of the issue on our computers. Unfortunately, we realized that our solution was flawed. When we were ready to submit a pull request, we realized that the developers were not going to accept the changes we had made to the npm module directory. That directory was not part of the original repo and cannot be cloned from GitHub by other users.

Once we knew we were going to need some guidance, we reached out to the developers on Slack. Basically, they said the property that needed to be updated was part of the schema that gets implemented when users run their npm install commands. Therefore, we were directed to reach out on JIRA and mention a particular developer to help fix the issue. Once we did this, the developer implemented the change we requested and testing feedback was posted on JIRA. Within a couple of days the ticket was successfully closed out. While we may not have been able to get a pull request submitted, we did directly track this issue and ensure it was solved.

One lesson to take away would definitely be to ask questions early on. Instead of struggling to look for a solution and working on the issue for a whole sprint, we could have reached out to the developers and found out that the full solution was beyond our capabilities. If we were to have more sprints, I believe we would adjust our velocity to take on 2-3 issues in one sprint. We’ve gotten pretty comfortable navigating the code base, so we would be able to track down the location of issues quicker. We would also communicate with the developers more to ensure speedier solutions. From a personal perspective, I don’t think I utilized the knowledge and help of the AMPATH developers enough. I saw it as bothersome to them and tried to avoid it at all costs. In reality they were quick to respond, knew exactly what to do, and usually it wasn’t all that difficult. Lesson learned; don’t be afraid to ask for help.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

The Software Craftsman: Chapters 15 & 16

Chapter 15 covers the topic of pragmatic craftsmanship and what the term actually means. The author starts off by reminding the reader that cheap and fast code that lacks quality will always become expensive and slow to change over time. Quality is always expected in the end result, and the author claims that quality does not have to be compromised by a team of software craftsmen. By practicing Extreme Programming and Agile methods, testing, integration, and deployment are done daily, with working software every step of the way. The initial learning curve is what causes teams to stray from this way of development. Instead, companies need to hire craftsmen so that the best quality and development environment can be achieved and passed on to junior developers. The author goes on to talk about other development staples like TDD and refactoring and implementing them correctly. He also advises that when working closely with a business, the team should be able to visualize the business’s goals quickly and get feedback so as to always please the customer. Projects are never about an individual, and code should always be kept small, testable, and easy to understand while constantly doing the job intended. Two main rules the author mentions regarding program design are to minimize duplication and maximize clarity. The author concludes by saying that a craftsman masters their practice and provides quality at a good value. Quality is always expected regardless of the situation, and to master the practices that deliver quality, one must be pragmatic.

After reading chapter 15, there is definitely some good advice here; however, the chapters are starting to become repetitive. While the main message was delivering quality at a proper value, many suggestions and topics such as TDD and refactoring are being repeated. The best lesson I took away was to never approach a project with a quick-and-cheap mindset. Customers always expect quality, and by practicing the right methodologies you can always guarantee it. Aside from that, the two design principles of minimizing duplication and maximizing clarity should be kept in mind by any programmer.

Chapter 16 covers the topic of a career as a software craftsman. The author outlines what being a software craftsman really means. Above all, you need passion. The world is becoming more and more reliant on software, and craftsmen need to be able to solve problems and be curious, not just write code. The author then discusses climbing the ladder over the progression of a career. Those who switch to managerial roles have switched ladders and should view it as a career change. Taking a job as a developer should be seen as more than just an occupation and really a lifestyle commitment. Each job should align with career goals and progression. As the author says, a craftsman is committed to excellence and to the role they play in the evolution of society.

After reading chapter 16, it felt mainly like a motivating summary of all the lessons learned throughout the book. The only part I still don’t fully agree with is limiting your job search to only a few companies. In reality, people have families to provide for and bills to pay. While I do not condone staying in a job you are unhappy with, sometimes you just can’t land the dream job and may need to settle or stay where you are a little longer. This especially applies to junior developers who are still just trying to gain exposure and experience in the field. In conclusion, I think this book was well written and provided some great guidance for a software developer at any stage of their career. That being said, I did find many chapters to be repetitive and many lessons to be similar to ones in The Clean Coder.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.

The Software Craftsman: Chapters 13 & 14

Chapter 13 covers the topic of creating a culture of learning in the workplace. The author insists the culture starts with the senior personnel, who are viewed as role models. If they are excited about being at work, it will rub off on other team members. The author also advises giving developers the freedom to learn. It leads to happy developers, more innovation, and an overall better work environment. He then goes on to give a lengthy list of things developers can do to create this better work environment:

Start a book club

Have tech lunches

Organize group discussions

Switch projects for a set amount of time

Conduct group code reviews

Encourage pet projects

The author then goes on to give advice to developers trying to create a culture of learning. He reminds the reader that you can’t change everyone with your enthusiasm and that’s okay. If you can help other developers re-discover their passion for what they do, it will still have a positive impact on the workplace. Some of his advice is as follows:

Be an example

Focus on those who care

Don’t force participation

Avoid consensus delays

Don’t ask for authorization

Don’t complicate things or make excuses

Establish a rhythm

The author concludes by saying that the culture of learning is everyone’s job. It can also be cheap and easy as passionate developers will naturally create this kind of environment.

After reading chapter 13, I think there’s a lot of really good advice to take away. I definitely agree with the overall message that having a culture of learning is imperative for any development team. Without it, developers’ skills get outdated and team innovation declines. Software development is an ever-changing field, and creating a culture of learning in the workplace is one way to keep up. There were only a couple of suggestions from the author that I didn’t fully agree with. The first was encouraging pet projects in the workplace. While I think it’d be okay to have a lunch discussion with a co-worker about a side project, it seems a little unprofessional to me to dedicate any of the work day toward it. If it’s not a topic that directly benefits your skills at work, I would save it for outside of work. I wouldn’t want to give any impression that my side work was more important than my company’s. Secondly, I didn’t agree with the advice to not ask for authorization. If you’re going to conduct training in the workplace, it’s always a safe bet to run it by the manager. Even if you don’t formally ask permission, at least invite the manager to stop by or participate. In summary, I think the author’s advice and suggestions promoting a culture of learning are really good.

Chapter 14 covers the topic of driving technical changes, more specifically how to convince skeptics to be more open to new ideas. First the author breaks these skeptics down into types someone may encounter, including “The Uninformed”, “The Irrational”, and “The Boss.” Once you’ve identified the type(s) you may need to convince, you need to be prepared for technical conversations and heated debates. To actually get the ball rolling toward a change you see fit, the author provides an outline: establish trust, gain expertise, lead by example, choose your battles, iterate, inspect, and adapt. He also suggests not letting fear and incompetence get in the way of doing what is right to implement a positive change. As far as getting permission from the boss, he says not to worry about that; bosses don’t care about the low-level implementation, they just want solid results. As far as convincing a team to adopt a new idea, you should be proficient and able to teach the skeptics. If you don’t have a grasp of the subject, it will come off as difficult and a waste of time. The author concludes with the idea that implementing changes is the responsibility of a true software craftsman. Whatever is best for the project and the customer is what should be implemented. In order to bring about these changes, a craftsman needs to know how to communicate with everyone and show the real value in the change they want to implement.

After reading chapter 14, there are definitely things to keep in mind for a future career in software development. I do think this chapter is aimed at much more experienced developers, as it would be extremely difficult for a junior developer to convince a team to make a technical change without the experience to back up their reasoning. That being said, it is of the utmost importance to satisfy the customers, and anyone who can see a beneficial change should try to make it happen. Once again, I did not agree with the idea of not informing the boss of a technical decision. A good manager will care about the team, and a good team will be transparent with their manager, especially if a technical change is made. As I mentioned for chapter 13, I don’t see it being necessary to ask permission. Something like “We are implementing TDD because we can benefit as a team in this way” should suffice. Aside from that, the chapter was useful and reminds the reader of responsibility and accountability.

From the blog CS@Worcester – Software Development Blog by dcafferky and used with permission of the author. All other rights reserved by the author.