Category Archives: CS@Worcester

JUnit Testing

Hello everyone,

For this week’s blog topic I will talk about JUnit: what it is, why it is used, the importance of it, the features it offers, and more. First, what even is JUnit? JUnit is an open-source testing framework for Java that allows programmers to write and then run automated tests. It is very useful for catching bugs early in development, when they are the least expensive to fix. Among its key features are simple annotations that make writing tests even easier. It is intuitive, and with just a little practice anyone can get the hang of it. Similar to the happy path tests we learned about with behavioral testing, JUnit encourages testing normal operations first, but it also supports negative cases and boundary tests.

The blog that I read was really useful, as it not only explained what JUnit is but also recommended some good practices for new programmers. For example, the author advised testing one behavior at a time. This is important: you want to test a single aspect of the code before moving on to the other parts. You should also use descriptive test names. This is helpful because a clear name explains directly what you are testing for, eliminating confusion and the chance of writing the same test twice. Another good piece of advice from the author is to write tests that are independent, meaning that different tests should not depend on each other’s results in order to run correctly. Lastly, you should always try to test the edge cases: the boundary conditions of the code and unexpected inputs. Your project should be ready to handle anything; even if an input does not make sense, the code should handle it correctly and guide the user in the right direction.
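Those practices are easy to picture in code. JUnit itself is Java, but the same xUnit ideas carry over directly; here is a minimal sketch using Python's stdlib unittest module, with a hypothetical discount function (all names are mine, not from the blog):

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent; reject nonsense input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # Descriptive names: each test states exactly what it checks,
    # and each test checks exactly one behavior, independently.
    def test_normal_discount_reduces_price(self):        # happy path
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_percent_leaves_price_unchanged(self):  # boundary
        self.assertEqual(apply_discount(100.0, 0), 100.0)

    def test_negative_percent_is_rejected(self):         # edge case
        with self.assertRaises(ValueError):
            apply_discount(100.0, -5)
```

Run with `python -m unittest`. Each test stands on its own, so they can run in any order without affecting one another.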
The blog also gives a detailed tutorial on how to install JUnit, with step-by-step instructions and examples included, and it also teaches us how to perform automated testing, even in the cloud. At the end, it even offers a FAQ section, clearing up any bit of confusion that readers might have. This is a great blog that I recommend everyone read. It is useful for all ranges of programmers, from beginners to more experienced ones.

In conclusion, JUnit testing is a fundamental skill to learn if you want to become a great Java developer. It helps you verify how your code behaves, and it helps you catch and fix any bugs that come up at any point during development. Mastering JUnit will not only improve your code quality but also give you a boost of confidence when you make changes, knowing that JUnit will be there to catch any bugs.

Source:
https://testgrid.io/blog/junit-testing/

From the blog Elio's Blog by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.

Static Testing

Article: https://www.browserstack.com/guide/static-software-testing-tools

This blog will focus on static testing. Static testing is the inspection of a program’s code without executing it. It happens at an early stage of creating a program, while the program is still being developed and the code can be adjusted before the final product. Reviewing a program’s files before its release saves a company money, since the program does not have to be reworked. Review analysis and static analysis are two different methods of static testing. An informal review is a type of review analysis where team members provide code feedback, while static code analysis uses software tools to detect coding errors. Static testing is used multiple times while coding a program. When a project is first assigned, whether in a professional or academic setting, programmers need to understand the requirements of their project. Usually, after the instructions have been reviewed, coding would be the next step, but static testing adds an extra step of checking whether the program has the documents needed for coding. Throughout the development of a program, a common practice is running the program, whether with unit testing or by running the whole program, so the programmer knows whether it is error free. Static testing at the coding stage can be feedback from team members, or different software tools such as Soot and Checkstyle. The BrowserStack Code Quality tool is one software tool for static testing. In my programming experience, I am used to having to manually fix my errors. This past week, I was introduced to new Visual Studio Code software tools for catching coding errors. BrowserStack Code Quality is one tool for automated static testing, where static testing is done through software tools.

BrowserStack Code Quality has an assistant that recommends how large classes in a program can be split into smaller classes. It can be installed in Android Studio, VS Code, or IntelliJ, and provides a quick program scan with feedback. Another software tool is Checkstyle, which only works with Java. Developers using Checkstyle learn about errors while writing code, rather than after the program has executed. They can also define coding conventions, and a program is checked against those defined conventions. Recently, I learned how to use PMD in Visual Studio Code. PMD detects logical errors in code such as uninitialized variables and unused code. It has a copy-paste detector that identifies duplicated code, and it supports more than 10 different programming languages.
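To make static analysis concrete, here is a toy checker, a hedged sketch using Python's stdlib ast module, that flags names assigned but never read, without ever executing the inspected code (real tools like PMD and Checkstyle are far more sophisticated than this):

```python
import ast

def find_unused_assignments(source):
    """Statically flag names that are assigned but never read."""
    tree = ast.parse(source)           # parse the code; never run it
    assigned, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)  # name written to
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)    # name read from
    return sorted(assigned - loaded)

sample = """
def total(items):
    count = len(items)   # assigned but never used
    result = sum(items)
    return result
"""
print(find_unused_assignments(sample))  # reports the unused name 'count'
```

This is the essence of the static approach: the check happens on the source text alone, before the program is ever run.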

From the blog jonathan's computer journey by Jonathan Mujjumbi and used with permission of the author. All other rights reserved by the author.

Comprehending Program Logic with Control Flow Graphs

This week I am discussing a blog post titled “Control Flow Graph In Software Testing” by Medium user Amaralisa. When I first read through this post, it immediately clicked with what we have been studying in class about different path testing types, which capture program logic in a similar way. The comparison between CFGs and a map used to explore the world or get from point A to point B is incredibly useful, as it explains the need for a guide to the many execution paths of a program. The writer made the topic easy to understand while still including the technical information required to apply these techniques moving forward.

This post helped me see the bigger picture in terms of the flow of a program and how the logic is truly working behind the code we write. It tied directly into what we’ve covered about testing strategies, especially white-box testing, which focuses on knowing the internal logic of the code. The connection between the CFG and how it helps test different code paths felt like a practical application of what we’ve been reading about in our course.

It also made me think about how often bugs or unexpected behavior arise not because the output is flat-out wrong, but because a certain path the code takes wasn’t anticipated. Seeing how a control flow graph can lay out those paths visually gives me a better sense of how to test, and even write, code more deliberately. It’s one thing to read through lines of code and think you understand what’s going on, but when you actually map it out, you might catch paths or branches you hadn’t considered before. I could definitely see this helping with debugging too: instead of blindly poking around trying to find what’s breaking, I can trace through the flow and pinpoint where things start to fall apart.
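Mapping the paths out is exactly what a CFG buys you. Consider a hypothetical function (my own example, not from the blog post) with two independent decisions: its control flow graph has 2 x 2 = 4 entry-to-exit paths, and a path-coverage suite needs a case for each:

```python
def ticket_price(age, is_member):
    # Decision 1: two edges in the CFG
    if age < 18:
        price = 5
    else:
        price = 10
    # Decision 2: two more edges -> 2 x 2 = 4 entry-to-exit paths
    if is_member:
        price -= 2
    return price

# One test input per path through the graph
assert ticket_price(10, False) == 5   # minor, non-member
assert ticket_price(10, True) == 3    # minor, member
assert ticket_price(30, False) == 10  # adult, non-member
assert ticket_price(30, True) == 8    # adult, member
```

With only two decisions the paths are easy to enumerate by hand; with more branching, the drawn-out graph is what keeps you from missing one.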

I also really liked that the blog didn’t try to overcomplicate anything. It stuck to the fundamentals but still gave enough technical depth that I felt like I could walk away and try it on my own. It gave me the confidence to try using CFGs as a tool not just during testing but also during planning, especially for more complex logic where things can easily go off track.

Moving forward, I am going to spend time practicing using CFGs as a part of my development process to ensure that I am taking advantage of tools that are designed to help. Whether it’s for assignments, personal projects, or even during team collaboration, I think having this extra layer of structure will help catch mistakes early and improve the quality of the final product. It feels like one of those concepts that seems small at first, but it shifts the way you approach programming altogether when applied properly.

From the blog cameronbaron.wordpress.com by cameronbaron and used with permission of the author. All other rights reserved by the author.

Spies and Their Role in Software Testing

As I was doing some at home research on stubs and mocking for one of my courses, I came across the idea of spies. Unlike stubs and mocks which allow for the program and tests to run while giving canned answers or being unfinished, spies perform a much needed but previously unfilled role.

Spies are used to ensure a function was called. It’s of course more in-depth than this, but that’s its basic function.

On a deeper level, a spy can tell not only whether a call to a function was made, but how many calls were made, what arguments were passed, and whether a specific argument was passed to the function.

Abby Campbell has great examples of these in her blog, “Spies, Stubs, and Mocks: An Introduction to Testing Strategies” where she displays easy to understand code. I would definitely recommend taking a look at them, her blog also goes in depth on stubs and mocking.

When writing test cases, the value of adding a spy to ensure a thorough case can’t be overstated. Imagine a simple test case that uses a stub: without a spy, you can’t be sure the correct function was called unless every function returns a different value, which would be inefficient to set up. By using a spy, the function called is checked, the argument passed is checked, and the output can even be checked as well, leaving little to no room for error in the test case aside from human error.
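As a hedged illustration (the names below are hypothetical, not from Abby Campbell's blog), here is what those spy checks look like with Python's stdlib unittest.mock:

```python
from unittest.mock import MagicMock

def register_user(name, notifier):
    """Registers a user and notifies them; notifier is injected for testability."""
    notifier(f"Welcome, {name}!")
    return True

# The spy records every call made to it
spy = MagicMock()
register_user("Ada", notifier=spy)

# Verify that the call happened, how many times, and with what argument
spy.assert_called_once()
spy.assert_called_with("Welcome, Ada!")
assert spy.call_count == 1
```

The spy never fakes any behavior of its own here; its whole job is to record the interaction so the test can verify it afterward.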

With the addition of spies to our arsenal of software testing tools, we check off the need for a reliable way of ensuring correct function calls and arguments. I plan on carrying this new tool with me throughout the rest of my career. It allows for much more efficient, effective, and sound testing.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.

Behavioral Testing

Source: https://keploy.io/blog/community/understanding-different-types-of-behavioral-unit-tests

Behavioral unit tests validate how code units operate under certain conditions, allowing developers to ensure that the software/application is working as it should be. Behavioral unit tests focus on specific pieces of code. They help developers find bugs early and work based on real scenarios. They lead to improved code quality because this unit testing ensures that the software is up to the expectations of the user, and allows for easier refactoring. The key types of behavioral unit tests include happy path tests, negative tests, boundary tests, error handling tests, state transition tests, performance-driven tests, and integration-friendly tests. The one that caught my attention was the performance-driven test. These tests validate performance under specified constraints, such as handling 10,000 queries. The test is run to ensure that performance remains acceptable under various loads. This test caught my attention because in my cloud computing class, I was loading files with millions of data entries, and the performance suffered, which highlights the importance of unit testing under conditions such as these.
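A performance-driven unit test can be sketched with nothing but the standard library: run the unit under a chosen load and assert the elapsed time stays within a budget. The load size mirrors the article's 10,000-query example, but the function and the time budget below are my own assumptions:

```python
import time

def lookup(index, key):
    """Unit under test: a hypothetical dict-backed query."""
    return index.get(key)

# Build a load of 10,000 entries, echoing the article's example
index = {f"key{i}": i for i in range(10_000)}

start = time.perf_counter()
for i in range(10_000):
    assert lookup(index, f"key{i}") == i
elapsed = time.perf_counter() - start

# Assert performance stays acceptable under this load (budget is an assumption)
assert elapsed < 1.0, f"10,000 queries took {elapsed:.3f}s"
```

The same shape scales up: swap in the real unit, a realistic load, and a budget derived from actual requirements.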

The difference between functional and behavioral unit tests is that functional tests validate the system’s function overall, whereas behavioral tests focus on specific pieces of code to make sure that they behave as expected under various conditions. Behavioral unit tests should be run every time code is built to make sure changes in the code don’t cause problems. Tools that can be used for this kind of testing include JUnit/Mockito for Java, pytest for Python, and Jest for JavaScript. I chose this article because we use JUnit/Mockito in class and thought it’d be wise to expand my knowledge on other unit tests. It’s good to get reassurance that unit testing is important from all of the sources I’ve been learning from, because it is very apparent that many different scenarios can cause many different problems in regard to the production of code/software/applications.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Automation Tools

This week in class, we did activities based on static testing, analyzing code with Gradle. Gradle is not a new tool to us, as we’ve worked with it throughout the semester. Since it is an automation tool with a lot of cool features, I took a further look into automation in software development. I wanted to know what the best features were, as well as the potential drawbacks of using automation. I ended up finding a blog called “Automation in Software Development: Pros, Cons, and Tools.”

What Else can be Automated?

We’ve learned by now that software testing can be automated. But is that it? Absolutely not. There are other important software processes that can be automated. One of them is CI/CD (Continuous Integration/Continuous Deployment). Automating continuous integration allows code changes by multiple developers to be continuously integrated into a common version control repository throughout the day, after which tests run automatically, making sure that newly written code does not interfere with existing code. Automating continuous deployment means that integrated and tested code is released to production automatically. Releases are quicker thanks to automated deployment, and safer because every new line of code is tested before it is even integrated.
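As a concrete (and purely hypothetical) illustration, a minimal GitLab CI configuration that runs the automated tests on every push might look something like this; the job name, image tag, and command are my assumptions, not something taken from the blog:

```yaml
# .gitlab-ci.yml - run the test suite automatically on every integration
stages:
  - test

unit-tests:
  stage: test
  image: gradle:jdk17
  script:
    - gradle test   # fails the pipeline if newly pushed code breaks a test
```

With a file like this in the repository, no one has to remember to run the tests; the CI server does it on every change.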

Automation can also be used to monitor and maintain code. There are automation tools that help analyze data, identify issues, and provide notifications about a deployed software product. With automation, issues can even be resolved automatically. This is really helpful because it drastically reduces the time and resources spent correcting errors.

Pros

Three of the largest benefits that come with automation are a reduced manual workload, lower development costs, and an increase in software quality. When tasks are automated, developers can use that now-free time to find ways to improve the software. This way, there is a better chance of the software having more advanced features, as well as customers being satisfied with the product. Many errors and defects in a deployed product come from human errors made during development. This is where automated testing comes in. Testing tools such as Gradle, JUnit, and Selenium were created for this purpose. Automated testing tools provide feedback on code in a snap compared to how long manual testing might take, which, as said before, leads to less time and money being spent rectifying errors. Reduced time and cost are two of the key automation benefits that persuade businesses to adopt it.

Cons

The challenges most often faced when implementing automation tend to be complexity of the tools, financial constraints, and human resistance. Automation tools can be tough for a corporation to set up, and some require skills that a corporation’s employees might not have. That means employees have to be trained to use them, which means more time and money spent. Though the last paragraph mentioned that automation lowers costs, it can be quite expensive when first implemented. From purchasing the required equipment to paying subscription and renewal fees, automation on a large scale seems to be a realistic option only for large companies. The return on investment might not be immediate either. There is also a concern that automation will soon replace human employees. This can create uncertainty and division in the workplace, because employees might not know whether they are at risk of being let go, so they might object to using automation.

Reference

https://www.orientsoftware.com/blog/automation-in-software-development/

From the blog CS@Worcester – Blog del William by William Cordor and used with permission of the author. All other rights reserved by the author.

Black Box VS White Box Testing

Hello! Today’s blog is about different types of testing methods. This is a topic that we learned about very early on this semester in my Software Quality Assurance and Testing class. I found an interesting article that expanded on what we already learned in class, and even introduced a new type of testing: gray box testing. The article is linked here: Black box vs white box vs grey box testing

The definition of black box testing that we were given during class was testing based on the specifications of a component/system only. The definition of white box testing that we were given during class was testing based only on the internal structure of a component/system.

Now, this may make sense to some people, but many people who don’t deal with code may have trouble understanding exactly what this means. The article I read helped me understand it a lot more, and that is the main goal of this blog post as well.

Black box testing is exactly how it sounds. Imagine a box where everything inside is blacked out: you cannot see what is going on inside the system, and you can only see it from a user’s perspective.

With white box testing, you can actually see all of the internal systems and functions going on. The user can actually go into detail and analyze what is occurring.
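The contrast shows up clearly in code. For a hypothetical absolute-value function (my own example, not from the article), a black-box test is derived from the specification alone, while a white-box test is written by reading the implementation and targeting each internal branch:

```python
def absolute(x):
    # Internal structure: two branches
    if x < 0:
        return -x
    return x

# Black-box tests: derived from the spec alone ("returns the magnitude of x")
assert absolute(5) == 5
assert absolute(-5) == 5

# White-box tests: derived from reading the code, one per internal branch
assert absolute(-1) == 1   # exercises the 'x < 0' branch
assert absolute(0) == 0    # exercises the fall-through branch
```

Notice that the black-box tests would stay valid even if the implementation were rewritten, while the white-box tests are tied to this exact internal structure.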

Now that I have summarized both of the main types of box testing, I want to introduce the idea of gray box testing as well. This testing is a combination of both black box and white box testing. With gray box testing, you test from a user’s perspective, but you also have access to some of what’s inside the code.

Out of all three types of testing, black box testing is the most user-friendly. Black box testing is designed for users who do not have much knowledge of the internal functions of the code but still need to be able to use it. This is like how websites have lots and lots of code behind them (even including the website you are reading this blog post on), but it is simplified for the user with labels and buttons. I do not need to know how to read HTML to be able to use this website. Although this isn’t testing, it is one example to help you visualize what this type of testing involves.

White box testing, on the other hand, requires the tester to have deep knowledge of the code at hand. Since white box testing works entirely from the inside, the only way to make use of it is to know how to read and work with the code itself.

Overall, this was a very intriguing topic to me, which is why I decided to write about it on this blog. Both the classwork and the articles I read were very helpful and informative.

From the blog cs@worcester – Akshay's Blog by Akshay Ganesh and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

Continuing from my last blog post for my CS-448 class, Sprint 2 has just finished. This sprint I was assigned the issue of creating the code that mimics what the Guest Info system would handle (https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/reportingsystem/reportingbackend/-/issues/94) by putting guest info into RabbitMQ, and the issue of taking the data from RabbitMQ and putting it into the database (https://gitlab.com/LibreFoodPantry/client-solutions/theas-pantry/reportingsystem/reportingbackend/-/issues/96).

As for what worked well this sprint, we worked well as a team and as we had planned. On the backend, setting up the queue to put data into RabbitMQ was easy enough after reading the documentation on how queues work, which addressed what I wanted to improve last sprint: not feeling unaware of how the systems worked. After that, we started to set up the code that would take the data from RabbitMQ and put it into the database. The RabbitMQ side of this went as smoothly as the queue did, but entering the data into the MongoDB database is where we ran into issues. The code doing the insert ran in the dev container, so it was unable to connect to the database, which is where I was stuck for most of the sprint. Another problem was that it wasn’t immediately obvious that the database connection was the problem, which led me to look for errors in places where there likely were none.

I think there are two ways we could improve as a team next sprint. One is to not assume that just because code exists, it works in the context you need it to; the other is to be more open to solutions that may include adding entirely new functionality that we didn’t specify in planning but that is needed to support another integral system. Individually, I want to improve by thinking harder about where my problems are and how they might affect things afterwards. I found that when I ask questions about a particular holdup, after I solve that part, I’m left without a direction because I didn’t think ahead past the roadblock. This really slows down my productivity and is overall an inefficient workflow.

This sprint, I think the apprenticeship pattern that is most applicable is the fifth pattern, Perpetual Learning. This is because this sprint I was learning new things about building a system, like the reporting system for Thea’s Pantry, only when there was an issue I was trying to fix, instead of taking time to learn whatever I can and build a base of knowledge that will help me when I come upon issues, so I already know what is going on. I also want to ask for help from people who may know more about a certain system or function than me, including my group and the other groups.

From the blog CS@Worcester – Computer Science Blog by dzona1 and used with permission of the author. All other rights reserved by the author.

Understanding Mocking in Software Testing

Software testing is crucial to ensure that the system acts as expected. However, when dealing with complex systems, it can be challenging to test components in isolation since they rely on external systems like databases, APIs, or services. This is where mocking is used, a technique that employs test doubles. Mocking allows developers to simulate the behavior of real objects within a test environment, isolating the components they want to test. This blog post explains what mocking is, how it is applied in unit testing, the categories of test doubles, best practices, and more.

What is Mocking?
Mocking is the act of creating test doubles, which are stand-ins for real objects that behave in pre-defined ways. These fake doubles take the place of actual objects or services and allow developers to isolate chunks of code. Mocking also allows them to simulate edge cases, errors, or specific situations that could be hard to replicate in the real world. For instance, instead of communicating with a database, a developer can use a mock object that mimics database responses. This offers greater control over the testing environment, increases testing speed, and allows issues to be found early.

Knowing Test Doubles
It is important to understand test doubles to fully comprehend mocking. Test doubles are fake objects that replace actual components of the system for the purpose of testing. Test doubles share the same interface as the actual object but act in a controlled fashion. There are several types of test doubles:

Mocks: Mocks are pre-initialized objects that carry expectations about how they will be called. Mocks are used to require particular interactions, i.e., function calls with specified arguments, to occur while the test runs. When the interactions do not match expectations, the test fails.

Stubs: Stubs do not care about interactions. They simply provide pre-defined responses to method calls so that the test can just go ahead and not worry about the actual component behavior.

Fakes: These are more fully featured test doubles with working but simplified implementations of real components. For example, an in-memory database simulating a live database can be used as a fake to speed up testing without relying on external systems.

Spies: Spies are similar to mocks but are employed to record interactions with an object. You can inspect the spy after a test to ensure that the expected methods were invoked with the correct parameters. Unlike mocks, spies will not make the test fail if the interactions are unexpected.
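All four doubles can be sketched in a few lines with Python's stdlib unittest.mock (Mockito offers direct Java equivalents); the database and logger here are hypothetical:

```python
from unittest.mock import MagicMock

# Stub: just returns a canned answer, with no expectations of its own
stub_db = MagicMock()
stub_db.find_user.return_value = {"name": "Ada"}
assert stub_db.find_user(42)["name"] == "Ada"

# Mock: the test asserts that an expected interaction happened
mock_db = MagicMock()
mock_db.save({"name": "Ada"})
mock_db.save.assert_called_once_with({"name": "Ada"})

# Fake: a working but simplified implementation (in-memory "database")
class FakeDb:
    def __init__(self):
        self.rows = {}
    def save(self, key, row):
        self.rows[key] = row
    def find_user(self, key):
        return self.rows[key]

fake = FakeDb()
fake.save(1, {"name": "Ada"})
assert fake.find_user(1) == {"name": "Ada"}

# Spy: records calls for later inspection without failing by itself
spy_logger = MagicMock()
spy_logger.info("saved")
assert spy_logger.info.call_count == 1
```

The distinctions are about intent: the stub feeds data in, the mock and spy check interactions, and the fake actually does (a simplified version of) the work.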

The Role of Mocking in Unit Testing
Unit testing is testing individual pieces, such as functions or methods, in isolation. But most pieces rely on external services, such as databases or APIs. These dependencies add complexity, unpredictability, and outside factors that can get in the way of testing.

Mocking enables developers to test the unit under test in isolation by substituting external dependencies with controlled, fake objects. This ensures that any problems encountered during the test are a result of the code being tested, not the external systems it depends on.

Mocking also makes it easy to test edge cases and error conditions. For example, you can use a mock object to throw an exception or return a given value, so you can see how your code handles these situations. In addition, mocking makes tests faster because it avoids the overhead of invoking real systems like databases or APIs.
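Simulating an error condition takes one line with a mock's side effect. In this hedged sketch, a hypothetical fetch fails and the code under test must fall back to a default:

```python
from unittest.mock import MagicMock

def get_config(client):
    """Return remote config, falling back to defaults on failure."""
    try:
        return client.fetch("/config")
    except ConnectionError:
        return {"mode": "default"}

# Configure the mock to raise, reproducing the failure deterministically
failing_client = MagicMock()
failing_client.fetch.side_effect = ConnectionError("network down")

assert get_config(failing_client) == {"mode": "default"}
```

Without the mock, exercising this path would require actually breaking the network, which is exactly the kind of slow, flaky setup mocking avoids.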

Mocking Frameworks: Mockito and Beyond
Various mocking libraries are used by programmers to create and control mocks for unit testing. One of the most widely used libraries in the Java community is Mockito. Mockito makes it easy to create mock objects, specify their behavior, and verify interactions in an easy-to-read manner.

Highlights of Mockito include:

Behavior Verification: One can assert that certain methods were called with the right arguments.
Stubbing: Mockito allows you to define return values for mock methods so that various scenarios can be tested.
Argument Matchers: It provides flexible argument matchers for verifying method calls with a range of values.
Other than Mockito, libraries like JMock and EasyMock can also be used alongside JUnit 5. For Python developers, the unittest.mock module is available. In the .NET ecosystem, libraries like Moq and NSubstitute are commonly used. For JavaScript, Sinon.js is the go-to library for mocking, stubbing, and spying.
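Mockito's argument matchers (anyString() and friends) have a stdlib analogue in Python's unittest.mock.ANY, which lets a verification pin down only the arguments that matter; the mailer below is hypothetical:

```python
from unittest.mock import MagicMock, ANY

mailer = MagicMock()
mailer.send("ada@example.com", subject="Hi", body="Welcome!")

# Verify the call without pinning down every argument exactly:
# the recipient must match, but subject and body can be anything.
mailer.send.assert_called_with("ada@example.com", subject=ANY, body=ANY)
```

This keeps tests focused on the interaction that matters instead of over-specifying incidental details, which ties into the "assert behavior, not implementation" practice below.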

Best Practices in Mocking
As terrific as mocking is, there is a best-practice way of doing it that keeps tests meaningful and sustainable. Here are a few rules of thumb to bear in mind:

Mock Only What You Own: Mock only entities you own, such as classes or methods that you have created. Mocking third-party APIs or external dependencies leads to brittle tests that break when those outer dependencies change.

Keep Mocks Simple: Don’t overcomplicate mocks with too many configurations or behaviors. Simple mocks are more maintainable and understandable.

Avoid Over-Mocking: Over-mocking your tests can make them too implementation-focused. Mock only what’s required for the test, and use real objects when possible.

Assert Behavior, Not Implementation: Tests must assert the system’s behavior is right, not how the system implements the behavior. Focus on asserting the right methods with the right arguments are called, rather than making assertions about how the system works internally.

Use Mocks to Isolate Tests: Use mocks to isolate tests from slow or flaky external dependencies like databases or networks. This results in faster and more deterministic tests.

Clear Teardown and Setup: Ensure that the mocks are created before each test and destroyed thereafter. This results in tests that are repeatable and don’t produce any side effects.

Conclusion
Mocking is an immensely valuable software testing strategy that lets developers isolate and test individual components apart from outside dependencies. Through test doubles like mocks, stubs, fakes, and spies, programmers can simulate real conditions, test edge cases, and make their tests more reliable and faster. Good practices must be followed: mock only what you own, keep mocks as plain as possible, and assert behavior, not implementation. Applied in the right way, mocking is a great ally in creating robust, stable, high-quality software.

Personal Reflection

I find mocking to be an interesting approach that enables specific and effective testing. In this class, Quality Assurance & Testing, I’ve gained insight into how crucial it is to isolate the units being tested in order to check their functionality in real-world settings. In particular, I’ve seen how beneficial mocking can be in unit testing, enabling the isolation of certain interactions and edge cases.

I also believe that, as developers, we tend to over-test or rely too heavily on mocks, especially when working with complex systems. Reflecting back on my own experience, I will keep in mind that getting the balance right, mocking when strictly required and testing behavior, not implementation, is the key to writing meaningful and sustainable tests. This approach helps us ensure that the code is useful and also adjustable when it encounters future changes, which is, after all, what any well-designed testing system is hoping for.

Reference:

What is Mocking? An Introduction to Test Doubles by GeeksforGeeks, Available at: https://www.geeksforgeeks.org/mocking-an-introduction-to-test-doubles/.

From the blog Maria Delia by Maria Delia and used with permission of the author. All other rights reserved by the author.

Sprint 2 Retrospective

In this post, I’ll be reflecting on our second sprint towards developing and implementing an Identity and Access Management system for Thea’s Pantry. Coming out of Sprint 1, we had a better idea of Keycloak in general, and we had some basic frameworks for a fake frontend and fake backend. Our sprint goal for Sprint 2 was to fully integrate these components, so that we could provide a proof of concept for the entire workflow, as opposed to just one component. We wanted to be able to force authentication on a frontend page via a Keycloak login page, and then we wanted to be able to store the resultant access token from that interaction so that we can perform authenticated actions without ever talking to Keycloak again.

Some of my personal work towards that goal was as follows:

GitLab

  • Documenting our low-level issues in GitLab and assigning them accordingly. I put additional focus/effort this sprint into properly linking related issues, blockers, and tracking various key information in comments, as opposed to just using issues as a task list. Epic

Backend

  • Refactor the backend endpoint to verify the signature of a JWT to ensure authenticity. Note – this was a great learning experience in better understanding how async and await work in JS. This issue took me way too long to resolve. Squash Commit

  • Further briefly modify the endpoint to pull specific custom data out of the generated JWT from Keycloak. Commit
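For illustration only: Keycloak signs its JWTs with RSA in practice, but a symmetric HS256 sketch using just the Python standard library shows the shape of the check the endpoint performs (every name here is an assumption, not the project's actual code):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(header: dict, payload: dict, secret: bytes) -> str:
    """Build a signed token: base64url(header).base64url(payload).signature"""
    signing_input = (f"{b64url(json.dumps(header).encode())}."
                     f"{b64url(json.dumps(payload).encode())}")
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify_hs256(token: str, secret: bytes) -> bool:
    """Recompute the signature over header.payload and compare in constant time."""
    head, body, sig = token.split(".")
    expected = hmac.new(secret, f"{head}.{body}".encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

secret = b"dev-only-secret"
token = sign_hs256({"alg": "HS256"}, {"sub": "user1"}, secret)
assert verify_hs256(token, secret)            # genuine token passes
assert not verify_hs256(token, b"wrong-key")  # forged/tampered token fails
```

The asymmetric (RS256) case Keycloak actually uses follows the same verify-the-signature-over-header-and-payload shape, just with a public key instead of a shared secret.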

Frontend

  • Configure Docker compose files and Git submodules to containerize all three repositories into the fake frontend to test the whole flow. Commit

  • Completely facelift/refactor/rework/reimplement the fake frontend to use Vue as a build framework to test our implementation in the same context as it will be used in production. Configure dependency and instantiation of Keycloak in the JS to handle redirect and access token storage and usage. Commits: 1, 2

Something that worked particularly well this sprint was our focus on increased communication. We refactored our working agreement to address some of our shortcomings in communication and accountability, and I felt like this sprint was better for us around the board. We had a bit more direction this sprint, and we accomplished our goal exactly as we laid it out, barring 2 lines of code that we have to add that are just blocked right now.

That said, at risk of contradicting myself, I feel like something that did not work as well, and that we can continue to improve, is also our communication. Though it was better this sprint, it still definitely felt at times like we were not a team, and instead like we each had our tasks that we would connect on once or twice a week in class meetings. Maybe this is fine, and to be honest it worked okay for the most part, but in an ideal world I would have us all being very proactive and communicative about our issues, though I don’t know if this is a fair thing to aim for our team to improve on, or if maybe I should reevaluate my expectations.

Something I could improve is my focus on defining roles and responsibilities for the general team dynamic, not just for issues. I felt like I focused on accountability for issues on GitLab, for example, but I also feel like I informally assumed the role of Scrum Master / Sprint Lead for this sprint, though we never really defined or said that. It seemed to work fine for us, but it is something I think I could have specified better, instead of just sort of assuming a leadership role.

The pattern I have chosen for this sprint is The Deep End. This is because one of the issues I spent the most time on during this sprint was implementing JWT signature verification. This should not have been a difficult issue, but I really have never worked with functions in JS specifically, and for some reason I was caught in a loop of bad syntax and usage of things like const, async, and await. I had no idea what I was doing, and was so lost as to why my code was not working. It took a lot of reading and being lost for a while before finally realizing my error was not in the libraries I was using, but simply in my lack of understanding of JS.

From the blog Mr. Lancer 987's Blog by Mr. Lancer 987 and used with permission of the author. All other rights reserved by the author.