
Can Penetration Tests Actually Help Overcome the Cybersecurity Crisis?

For my last blog of the semester I chose a post on using penetration tests to strengthen cybersecurity; the article can be found here. Cyber attacks are becoming more and more common today, and they can affect anyone from an individual to a large corporation. In the last few years alone we have seen a large number of corporations deal with large-scale data leaks. Companies prepare for these attacks by using vulnerability assessment, penetration (or pen) testing, security scanning, risk assessment, and ethical hacking to make their systems more secure. In this article we are going to focus on penetration testing.

What is penetration testing? In this blog it is defined as follows: a sanctioned, triggered attack conducted on a computer system to assess security flaws which could otherwise result in a data breach or intrusion within the system. Basically, you carry out a cyber attack yourself in hopes of finding insecurities that might one day be exploited in a real attack. This can be done by either manual or automated testing, and it is also referred to as a white-hat hack.

Penetration testing can be broken down into a few different subcategories. First up is targeted testing, which is done by the system's IT team and is witnessed by everyone in the system. Next is external testing, which tests all of the organization's externally facing servers, such as web servers, domain name servers, and firewalls; this type of testing shows what kind of damage can be done to the system from the outside. The next type is internal testing, which tests what kind of damage can be done from inside the system with authorized access. The final type is called blind testing, where the attacking team has only limited information to work with; this is meant to simulate a real-life attack, where anything could happen.

“Gartner in its report mentions that by 2020, 40 percent of all managed security service (MSS) contracts will be bundled with other security services and broader IT outsourcing (ITO) projects, up from 20 percent today.” As we can see, the security field is growing rapidly as the threat of cyber attacks increases. One way to combat this threat is with penetration testing. Cybersecurity can be very expensive, so it is important for a company to be able to use penetration tests to figure out where it should be putting its money.

In conclusion, I picked this article because it covers a type of testing that I had not seen much of throughout the semester, so it piqued my interest. The thing I found most interesting is how similar this type of testing is to regular testing. Even though you are testing the security of a system, the testing is done in much the same way, by probing all the different boundaries of the system. Automation can help with this, but manual testing is always needed to accompany it. For manual security testing, many of the testing techniques we have discussed would also work here, like boundary value testing and edge testing. While looking at many different types of testing this semester, I noticed that they all share a lot of similarities, and they all seem to be made up of some combination of the techniques we have learned. I have built a solid testing foundation this semester, which will allow me to continue learning in the field of testing as I move forward.


From the blog CS@Worcester – Dhimitris CS Blog by dnatsis and used with permission of the author. All other rights reserved by the author.

Over-reliance on testing considered harmful!

This week I chose a blog that talks about some of the harmful effects of over-reliance on testing; the article can be found here. Throughout the semester I have read several blogs that talk about the many positives of software testing. I chose this one because I thought it would be interesting to get another perspective on the topic.

The main point this article is trying to get across is that while testing is great, focusing extensively on testing is not the right strategy for ensuring high-quality software. Testing specifications are a great tool, but not necessarily the best one for finding bugs. The test-case specifications given to testers are often a great way to figure out what the client needs and to set up boundaries and documentation. However, this might not always lead to discovering more bugs in our code. The article gives an example of two teams: one was given clear test specifications, and the other was told to simply test and make the code the best it could be. The team without the testing specifications found more bugs than the team that had them.

There is no perfect solution when it comes to software development, but some things have been shown to be effective. These are old-fashioned manual review techniques such as peer reviews, inspections, and code and design walkthroughs. Manual reviews have many positives, the first being that on top of finding actual bugs they can also find potential and latent bugs. Additionally, these reviews can be used to find design flaws and weak spots in your code that might be harder to find later on. The downside to manual reviews is that they are effort intensive, which can make them harder to carry out under a time constraint. To combat this, the blog recommends using static analyzers as part of the development life cycle. These tools are able to find bugs that are hard to detect with testing or even manual reviews, by using modern and advanced techniques such as model checking, abstract interpretation, and program querying. One downside to static analysis software is that it is costly; however, it usually has a high return on investment.
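To make the idea concrete, here is a small sketch of my own (the class and method names are made up, not from the article) of the kind of latent bug that static analyzers are good at catching even when no test happens to exercise the failing path:

// Hypothetical example of a latent bug that a static analyzer such as
// FindBugs/SpotBugs typically reports via data-flow analysis.
public class ConfigLoader {

    // May return null when the variable is unset -- easy to forget.
    private String lookup(String name) {
        return System.getenv(name);
    }

    public int getPort() {
        String port = lookup("APP_PORT");
        // Possible NullPointerException: 'port' is dereferenced without a
        // null check. An analyzer flags this path statically; a test suite
        // only catches it if someone happens to run with APP_PORT unset.
        return Integer.parseInt(port.trim());
    }
}

A human reviewer might spot this too, which is why the article treats manual reviews and static analysis as complements rather than substitutes.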

In conclusion, when testing is combined with manual reviews and a static analysis tool in the development life cycle, the result can be high-quality software. I found this article very interesting all the way through. It touched on some of the same things we saw in class when we did our own code reviews. It was cool to see how a lot of the different things we have been doing throughout the semester can come together to produce better software. Lastly, I found the static analyzer portion of the article very interesting, although I am still not one hundred percent sure how these tools work. The article did not go into great detail on the specifics of how static analysis software works, and this is something I would like to look into more in the future, as the benefits seem to be great.

From the blog CS@Worcester – Dhimitris CS Blog by dnatsis and used with permission of the author. All other rights reserved by the author.

The Testing Show: Episode 43: Machine Learning

For this week's blog I listened to a podcast from The Testing Show on machine learning, which can be found here. The podcast starts with a discussion of a recent tech story about Facebook's AI chat bots developing their own language and having to be shut down. To put this story in context, the podcast goes into some detail about what machine learning is. The guest on the show, Peter Varhol, defined it as follows: machine learning works on the feedback principle, meaning we produce better results by comparing the results of the machine learning algorithm to the actual results and then feeding the difference back in to adjust the algorithm, producing incrementally better results. After this they discuss what they think might have happened with the Facebook chat bots, moving through multiple views, from intrigued, to skeptical, to really questioning whether the bots developed a language at all or just gibberish.
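As a rough illustration of that feedback principle, here is a minimal sketch of my own in Java (not code from the podcast): a single-parameter model is repeatedly compared against known answers, and the error is fed back to nudge the parameter toward better results.

// Minimal feedback-loop sketch: fit y = w * x by feeding the error back.
public class FeedbackLearning {
    public static void main(String[] args) {
        double[] inputs  = {1, 2, 3, 4};
        double[] targets = {2, 4, 6, 8};   // true relationship: y = 2x
        double weight = 0.0;               // initial guess
        double learningRate = 0.05;

        for (int epoch = 0; epoch < 100; epoch++) {
            for (int i = 0; i < inputs.length; i++) {
                double predicted = weight * inputs[i];
                double error = targets[i] - predicted;      // compare to actual
                weight += learningRate * error * inputs[i]; // feed error back
            }
        }
        System.out.printf("learned weight ~ %.3f%n", weight); // approaches 2.0
    }
}

Real systems adjust millions of parameters this way rather than one, which is part of why their behavior becomes hard to trace, as the hosts discuss later.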

Although the podcast does not come to some big conclusion, the final view on the topic was the most thought provoking. The last view disputed whether the chat bots really created their own language, and if so, how. The hosts talked about how language is semantic, and how it would be impossible right now for a computer to understand semantics, let alone create a new language. One host theorized that the chat bots created a language of gibberish to make up for the fact that they could not fully understand what was being asked of them semantically. I thought this was a really interesting point; I had never considered the semantics of language, and it shows how far computer scientists have to go to reach real artificial intelligence.

In the last part of the podcast they dive deeper into how machine learning works and some of its pros and cons. One pro is that we can set up a series of algorithms and iterative processes to achieve something we simply couldn't do on our own. The example given here is the Facebook chat bot again: even if Facebook released the source code for the bots, the algorithms would likely be too complicated for almost anyone to understand, so it would be hard to verify the bots' results. As you can see from the example, our pro also becomes our con. Once we get what seems like a reasonable result from a machine learning algorithm, it becomes very hard to trace that answer back through the code.

The last machine learning topic covered in this podcast is the difference between supervised and unsupervised learning. Supervised learning is pretty straightforward: we know the result from our training data and try to get a result close to that known result. Unsupervised learning is harder to explain; we don't know the expected result and are just trying to optimize something. The example given is airline ticket sales online, where the algorithm is built to maximize profits for the airline. Online ticket prices change roughly three times a day, and there is no exact formula for how the airlines change their prices; the algorithm is simply trying to optimize the amount of money made.

This was a very interesting listen on a topic I don't know a whole lot about, but which seems to be the way of the future in tech. I picked this podcast because it is an interesting topic that comes up all the time in articles and discussion boards, and I wanted more information, as well as some idea of how to test it. Although the podcast did not get around to talking about how machine learning is tested, there is a second part, recently released, which should go into more detail on that. This first part gave a good overview of the topic and the issues that may arise, which gives you some things to think about in terms of how testing will be affected compared to testing a standard algorithm. I enjoyed this episode quite a bit and will be listening to the second part to see how to go about testing machine learning; I will also be noting my expectations for testing with machine learning before and after listening to part two.

From the blog CS@Worcester – Dhimitris CS Blog by dnatsis and used with permission of the author. All other rights reserved by the author.

FEW HICCUPPS

This week I read the blog post FEW HICCUPPS by Michael Bolton, which can be found here. The article is about better describing the expectations you have for your code before testing, which makes it easier to find problems in the code. The article also covers identifying and applying oracles. I could not figure out what that meant at first, but in an older Michael Bolton article I found the definition to be as follows: an oracle is a principle or mechanism by which we can tell if the software is working according to someone's criteria; an oracle provides a right answer—according to somebody. That article, called Testing Without a Map, can be found here.

The article is built around a mnemonic, which is split into two parts: FEW and HICCUPPS. The HICCUPPS part refers to a list of the author's oracle principles, and is laid out as follows:

  • History. We expect the present version of the system to be consistent with past versions of it.
  • Image. We expect the system to be consistent with an image that the organization wants to project, with its brand, or with its reputation.
  • Comparable Products. We expect the system to be consistent with systems that are in some way comparable. This includes other products in the same product line; competitive products, services, or systems; or products that are not in the same category but which process the same data; or alternative processes or algorithms.
  • Claims. We expect the system to be consistent with things important people say about it, whether in writing (references, specifications, design documents, manuals, whiteboard sketches…) or in conversation (meetings, public announcements, lunchroom conversations…).
  • Users’ Desires. We believe that the system should be consistent with ideas about what reasonable users might want.
  • Product. We expect each element of the system (or product) to be consistent with comparable elements in the same system.
  • Purpose. We expect the system to be consistent with the explicit and implicit uses to which people might put it.
  • Statutes. We expect a system to be consistent with laws or regulations that are relevant to the product or its use.

Each one of these is a criterion that can be used to identify a problem in your code: we have a problem once we realize that the product or system is inconsistent with one or more of these principles. The FEW part of the mnemonic is defined as follows:

Explainability. We expect a system to be understandable to the degree that we can articulately explain its behaviour to ourselves and others.

World. We expect the product to be consistent with things that we know about or can observe in the world.

Familiarity. We expect the system to be inconsistent with patterns of familiar problems.

These last three oracle principles were added later, updating the mnemonic from HICCUPPS to FEW HICCUPPS. The way to remember this, given in the article, is: When we're testing, actively seeking problems in a product, it's because we desire… FEW HICCUPPS.

I stumbled onto this blog by accident while looking at other blogs about testing consistency, but I am glad that I did. This was a very interesting read and was not like other blogs that I have read. All the different oracle principles in the article were interesting, and each seems to have its place. I think these principles would make you a much more consistent tester by giving you a reliable guideline for finding problems in your code. The mnemonic was my favorite part of the article and not something I have seen too many times in software testing. It seems like it would be very helpful for remembering these oracle principles well into the future, and it is something I am going to try to remember and use in my own testing. Oftentimes it is hard to pick where to start looking for problems in your code, or how to identify those problems; this mnemonic seems like a good way to fix that.

From the blog CS@Worcester – Dhimitris CS Blog by dnatsis and used with permission of the author. All other rights reserved by the author.

The New Normal for Software Development and Testing

Software and technology continue to grow at an increasing speed, and developers must grow with them. This week I read the article “The New Normal for Software Development and Testing” on the StickyMinds blog. The article talks about the ever-changing world of software development and testing, and the new normal for development and testing in this digital age of agility and continuous integration and deployment.

The article is broken up into a few sections, the first being “Development and testing are a team sport.” Development is not just for developers, and testing is not just for testers anymore. The article talks about the increasing role of developers and testers working together throughout the product lifecycle. This is something we have touched on in class, and it seems to be a trend in the field. The next section, “Data and analytics are now playing an increasing role,” talks about the importance of data and analytics in development and testing. Analytics can be used to inform the whole development lifecycle, from development and testing to integration and deployment.

“TestDev thinking is pervasive”: this section was similar to the first one regarding teamwork. It talks about traditional roles becoming more collaborative, with testers advising the development team from early on and vice versa. It also talks about how those with experience are contributing to the automation of development, testing, and release for the whole project. “Testing in production is a frequent practice”: this section talks about why today we must test in production. As systems grow much larger, it is impossible to emulate the real world in test cases, which leaves us no choice but to test in production. As a result, new features can be added to the code but not enabled, allowing the team to control the exposure of new or modified code.

“Deeper skill sets are a requirement”: this section explains that those in development need more analytical knowledge and deeper technical skills as traditional roles become more blurred. “Automation is pervasive”: extensive automation yields consistency and repeatability. This allows the team to focus on more technically advanced areas; automated testing dominates the teams and environments that are deploying frequent updates and changes. “Near real-time measures and metrics are expected”: with most of today's lifecycle tools, we can produce real-time information and data very quickly. The last section, “The tolerance for risks may change,” talks about quality engineering. There are now more ways to avoid defects, or to apply small changes in batches to limited users and roll the changes back if errors are found.

I thought this article was a pretty interesting read, although it might have been a little broad; I would have liked it to include more detail or spend longer on certain subjects. A lot of the things talked about in the article feel like topics we have discussed in class, which was nice to see. My big takeaway was the blurring of traditional roles in the development field and how people have to work together much more closely now. This is something we talked about in class early on, and it was interesting to read another perspective on the matter. In conclusion, I think I had already been exposed to the information in this article through the computer science program, but it was nice to get another view on it, as well as positive reinforcement that I am on the right path.


From the blog CS@Worcester – Dhimitris CS Blog by dnatsis and used with permission of the author. All other rights reserved by the author.

7 Tips for Writing Better Unit Tests in Java

This week I chose a blog on unit testing, which can be found here: 7-tips-writing-unit-tests-java. I chose this article because we are starting to look at JUnit testing in class, and it seems to give a good overview of some of the fundamentals of unit testing. The article is designed to help you write better unit tests, and it does this through seven tips.

The first tip in the article is to use a framework for unit testing. The two frameworks the article talks about are the two most popular, JUnit and TestNG. These frameworks make it much easier to set up and run tests, and they make tests easier to work with by letting you group tests, set parameters, and much more. They also support automated test execution by integrating with build tools like Maven and Gradle. The article closes this tip out by mentioning another add-on to JUnit or TestNG called EasyMock, which lets you create mock objects to facilitate testing.
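As a minimal sketch of what these frameworks give you (the Calculator class is made up for the example, and I am assuming JUnit 4 here), a test is just an annotated method that the framework discovers, runs, and reports on:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, kept trivial for the example.
class Calculator {
    int add(int a, int b) { return a + b; }
}

public class CalculatorTest {
    @Test
    public void addReturnsSumOfOperands() {
        // JUnit finds this method via @Test, runs it, and reports
        // pass/fail -- no main() method or manual wiring needed.
        assertEquals(5, new Calculator().add(2, 3));
    }
}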

The second tip is test-driven development (TDD), a process in which tests are written from the requirements before any coding begins. Each test initially fails; the minimum amount of code is then written to pass the test, and the code is refactored until it is optimized. TDD leads to simple, modular code that is easy to maintain, and it speeds up development time. TDD is not, however, suited to very complicated designs or to applications that work with databases or GUIs. The third tip is measuring code coverage: generally, the higher the percentage of code that is covered, the less likely you are to miss bugs. The article mentions tools like Clover, Cobertura, JaCoCo, and Sonar, which point out areas of code that are untested; this can help you develop tests to cover those areas. High code coverage does not, however, ensure that the tests are working perfectly.
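To picture the TDD cycle from the second tip, here is a small made-up example of my own (the names are not from the article): the tests are written first and fail until the minimal implementation is added.

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Step 1 ("red"): write these tests before PasswordValidator exists,
// so the build fails. Step 2 ("green"): add just enough code to pass.
// Step 3: refactor while keeping the tests green.
public class PasswordValidatorTest {

    @Test
    public void rejectsPasswordsShorterThanEightCharacters() {
        assertFalse(PasswordValidator.isValid("short"));
    }

    @Test
    public void acceptsPasswordsOfEightOrMoreCharacters() {
        assertTrue(PasswordValidator.isValid("longenough"));
    }
}

// The minimal implementation that satisfies the tests above.
class PasswordValidator {
    static boolean isValid(String password) {
        return password != null && password.length() >= 8;
    }
}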

The fourth tip in the article is to externalize test data wherever possible, so that test cases can be run with different data sets without changing the source code. The article gives code examples of how to do this in both JUnit and TestNG. The fifth tip is to use assertions instead of print statements. Assertions automatically indicate test results, while print statements clutter the code and require manual intervention by the developer to verify the printed output. The article compares two test cases, one with a print statement and one with an assertEquals: the print-statement test case will always pass, because the result has to be verified by hand, while the assert version fails on its own if the method returns a wrong result and requires no developer intervention.
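Here is a sketch of tips four and five together, assuming JUnit 4's Parameterized runner (my own example, not the article's): the data table sits apart from the test logic, and in practice it could be loaded from a CSV or properties file, while an assertion replaces a print statement so no human has to inspect the output.

import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class SquareTest {

    // The test data lives here, separate from the test logic; swapping
    // in a different data set requires no change to the test method.
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 2, 4 }, { 3, 9 }, { -4, 16 }
        });
    }

    private final int input;
    private final int expected;

    public SquareTest(int input, int expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void squaresItsInput() {
        // assertEquals fails the test by itself on a wrong result;
        // System.out.println would pass regardless and need a human reader.
        assertEquals(expected, input * input);
    }
}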

The sixth tip is to build tests that have deterministic results. Not all methods have a deterministic result; the article gives the example of a method that calculates the time required to execute a very complex function. In this case it would not make sense to test the value, because the output is variable. The seventh and final tip is to test negative scenarios and borderline cases in addition to positive scenarios. This includes testing both valid and invalid inputs, and inputs that are borderline, as well as the extreme values of the inputs. This is similar to the testing we have done in class, such as robust worst-case testing.
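A hypothetical sketch of the seventh tip, using a small grading method of my own invention: the tests hit the exact boundaries of the valid range and the invalid values just outside it.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Made-up class under test: maps a 0-100 score to a letter grade
// (abbreviated to A/B/F here) and rejects anything out of range.
class Grade {
    static String letterFor(int score) {
        if (score < 0 || score > 100) {
            throw new IllegalArgumentException("score out of range: " + score);
        }
        return score >= 90 ? "A" : score >= 80 ? "B" : "F";
    }
}

public class GradeTest {

    @Test
    public void coversBoundariesOfTheValidRange() {
        assertEquals("F", Grade.letterFor(0));    // lower boundary
        assertEquals("A", Grade.letterFor(100));  // upper boundary
        assertEquals("A", Grade.letterFor(90));   // edge of the A band
        assertEquals("B", Grade.letterFor(89));   // just below it
    }

    // Negative scenarios: values just outside the range must be rejected.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsScoreBelowZero() {
        Grade.letterFor(-1);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsScoreAboveOneHundred() {
        Grade.letterFor(101);
    }
}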

I chose this article because we are starting to look at unit testing, specifically JUnit testing, and I thought it would be interesting to look at some of the basics of unit testing to familiarize myself with it. The parts that stood out to me were the sections on test-driven development and code coverage. I like test-driven development because it seems like it would fit well into object-oriented design and allow for some sleek coding. As for code coverage, I liked that the article included multiple tools for measuring it; these are programs I have never used before, and they seem like they would be a great help both in measuring coverage and in figuring out which sections of code to test next. The last couple of tips are very similar to the types of testing we have been doing in class, which gave some insight into how those techniques can be used in unit testing. The last thing I liked about the article was the example tests included for both JUnit and TestNG, which gave some insight into how certain tests could be run and into some of the differences between JUnit and TestNG.

In conclusion, I enjoyed the article even though it was quite simple at times. It gave good insight into unit testing and the different tools that can be used to write better tests and make your life as a tester much easier and more enjoyable. My one complaint is that the article does not go into much detail in some of the sections, specifically the sections on deterministic results and on assertions; I felt both of these could have benefited from greater detail. Neither section was very long, and each had only one example; additional examples and greater detail would have gotten the ideas across much more clearly.


From the blog CS@Worcester – Dhimitris CS Blog by dnatsis and used with permission of the author. All other rights reserved by the author.

Ministry Of Testing – The Philosopher and Tester with Israel Alvarez

Link: https://dojo.ministryoftesting.com/lessons/the-philosopher-and-tester-with-israel-alvarez

This podcast was recorded at TestBash Philadelphia last year, and it features Israel Alvarez, who was a speaker at the event. The podcast begins with a little background information on Israel. He is relatively new to the software testing field, with only two years of experience. His background is not in testing, however; he has degrees in philosophy and mathematics, and the podcast focuses mainly on how his philosophy background has shaped his career in testing.

Israel begins by talking about how philosophy has helped him in his testing career. He calls himself a context-driven tester, and philosophy helps with the critical thinking that context-driven testing depends on. As Israel continues, he talks about when he first started out and tried to figure out when to actually stop testing. They look at this philosophically, asking the same question from a philosophical angle: at what point do you stop thinking about and updating your beliefs? Although they do not give a concrete answer, it is an interesting question comparing everyday thinking and software testing thinking.

At this point the podcast goes a little off topic. Israel talks about how he got into philosophy, citing religion as one of his main influences. Although I didn't think this part was very relevant, Israel goes on to say that he believes religion is a packaging of ideas, which I thought was an interesting way to look at it. A big chunk of the podcast covers James Bach's Socratic questioning, which Israel studied at RST. Israel says the main thing he took away from it is the ability to articulate and defend his views. This type of questioning puts pressure on the tester to defend his views on a specific topic, with somebody else asking a series of questions doubting those views and the tester defending himself. It is used for questions such as whether or not something is a bug, which not everyone might agree on. This seems very helpful for flushing out bad ideas that are not well thought out, as well as for keeping you from getting entrenched in specific views. The podcast ends on a bit of a tangent, with a discussion of automation in testing. Israel says he does not think it is mandatory to use automation in testing, but that it can give the team a lot of confidence in their tests. He also says he does not start automating a system until he knows that system very well.

I chose this podcast because recently I have been getting a lot more into philosophy, which I feel has helped my everyday life, and this seemed like an interesting chance to see how it could affect programming and testing. From this podcast I learned some interesting ways to think about testing and philosophy. The biggest thing for me was the Socratic questioning by James Bach, which sounded very interesting and helpful, not only for upgrading your ideas and eliminating bad ones, but also for articulating and fleshing them out before you implement them. This can be used to avoid many simple mistakes one might make in testing. I have been reading The Meditations by Marcus Aurelius, and one thing I have noticed about philosophy in software development is that it can help with thinking logically, staying on task, not getting overwhelmed by the future or the program as a whole, and tackling things one step at a time. I believe all of these things will help you become a better software developer. My only complaint is that the podcast did not dive very deep into philosophy in testing with more examples, but it was a very good introduction. It has definitely sparked my interest, and I am going to be looking for more information on this topic in the future.


From the blog CS@Worcester – Dhimitris CS Blog by dnatsis and used with permission of the author. All other rights reserved by the author.

TESTING IN THE PUB EPISODE 37 – MAKING BETTER TESTERS WITH KEITH KLAIN (week 3 blog)

For this blog I chose the podcast Making Better Testers with Keith Klain from the Testing in the Pub podcast website, where they discuss various topics in the software testing field. I picked this episode because it sounded interesting and seemed like a good starting point into the world of software testing. It discusses some of the shortfalls in the testing field and areas to improve, as well as giving an overview of the state of the industry today.

The podcast starts with the guest of the show, Keith Klain, discussing one issue that seems to come up a lot in the testing field: testing teams' objectives often do not align well with the business side. He talks about how, when he consults for a company, he will often see that the testing team is effective, but that its goals simply do not match those of its business counterparts. He goes on to state how important it is for the testing team to communicate well with the business team and to have their goals aligned, to make testing as effective as possible.

The next section of the podcast builds on where the previous one left off. It talks about how one of the big things holding the industry back is that a lot of companies still use very old ways of handling their testing, mostly outsourcing all of it to independent firms and testing centers. Keith Klain states that as many as fifty percent of major financial firms and banks still outsource all their testing to large testing centers in other countries. This not only makes it harder to communicate and keep everyone on the same page, but since these companies treat testing as an external service and just get the results back, they never truly understand the data that testing gives them. The podcast also discusses the financial aspect, stating that it is hard to phase these practices out completely because they are so deeply ingrained, and the testing centers make a lot of money and employ a large number of people. Keith Klain also states that the reason a lot of these companies are having a hard time updating their systems to agile or waterfall development is how heavily they rely on these large testing centers.

The last section of the podcast talks about exposure and communication in the testing community. The hosts refer to a recent expo they attended, which was not a testing expo but a more general tech expo, and talk about how a lot of people seem to have misconceptions about what testers are and what they really do. Keith Klain says the testing community needs to be a little more social in order to draw new people in. To finish up, he talks about the usefulness of experience reports and of documenting some of the experiences you have in your testing job, and how this can be really helpful for figuring out which systems work in different contexts.

In conclusion, I found this podcast very interesting, and it gave good insight into where the industry is today and where it is going. To me, testing seems to be plagued by a lot of the same issues as other parts of tech, such as technology moving very fast and a lot of non-tech companies having trouble keeping up. The industry also seems to be held up by a lot of old infrastructure that is hard to remove because of the amount of money tied up in it, which I would say is a problem in the wider tech industry too. One thing that interests me is seeing how the industry will grow: will testing teams start working much more closely with development teams, and how will this affect performance? If so, will we then see a shift to developers writing their own tests, making the whole process that much more unified, and how would that affect performance and the industry? There is a part two of the podcast which I am looking forward to listening to.

-Thank you for reading

-Dhimitris


From the blog CS@Worcester – Dhimitris CS Blog by dnatsis and used with permission of the author. All other rights reserved by the author.

Dhimitris CS 443 Introduction

Intro post for CS 443, looking forward to a great semester.

From the blog CS@Worcester – Dhimitris CS Blog by dnatsis and used with permission of the author. All other rights reserved by the author.

Week 5

This week I started looking at the ticket TRUNK-248 with my group. The ticket deals with going through the to-dos and finding where they are asking for a JUnit test. First I did a bit of research on JUnit tests so I could get some background. After a quick Google search I went to http://www.vogella.com/tutorials/JUnit/article.html, which was a great in-depth article on JUnit. I didn't focus on the usage aspects too much, since that's not what my ticket is asking for, but it did explain how the tests work and what to search for in the source code to track some of these down. After that I went to https://wiki.openmrs.org/display/docs/Step+by+Step+Installation+for+Developers, which helped me set up the OpenMRS core in Eclipse. I had to go to Help > Eclipse Marketplace and get a Maven plugin for Eclipse. After installing the Maven plugin I went to File > Import, chose Existing Maven Projects, and selected the core repo that I had cloned from GitHub. After the import finished, which took a couple of minutes, I finally had access to the OpenMRS source code. I started looking through some of the code and identifying some of the to-do items that ask for JUnit tests. I wasn't sure if the ticket was still active or not, but I did find a bunch of to-do items asking for JUnit tests, so it seems like it still is. This coming week my group and I hope to identify as many of these as possible so we can complete our ticket.

From the blog Dhimitris CS Blog » CS401 by dnatsis and used with permission of the author. All other rights reserved by the author.