Monthly Archives: December 2018

The Smartest Fly (A)L(I)VE

This week, as I was looking for articles, one in particular caught my eye almost immediately: “Artificial fly brain can tell who’s who.” I knew that I had to write about it. The article was posted in mid-October (October 18th, to be exact), so it is fairly recent.

This article talks about how researchers at the University of Guelph and the University of Toronto have built a neural network that almost perfectly matches a fruit fly’s visual system, and it can even tell the difference between individual flies and re-identify them. They achieved this by combining expert knowledge of the biology of the common fruit fly with machine learning to produce a biologically based algorithm.

The article then goes into more detail about the biology of the fruit fly, followed by a closer look at the computer program in the next paragraph, and concludes by discussing the future of neural networks and AI.

I thoroughly enjoyed this article, I truly believe it was well worth the read, and I encourage others to find the time to read it as well. The part I found most interesting was that, using this neural-network-based program, the “artificial fly” was able to identify other flies with a score of .75, or about 75%. They tested this by recording a fly for two whole days and then testing the program on the third day to see if it was in fact able to identify it. They also tested the algorithm without the fly biology constraints, and it scored .85 and .83, only slightly better than the constrained program, which makes those results very impressive. They also went on to compare it to human fly biologists, who scored only .08. Lastly, on top of all these comparisons, they noted that random chance would score only .05. This is unbelievable in my opinion. The fact that a computer program scored that much higher than a human is truly insane. I think this research is a huge step in the right direction for AI. After reading this article, I am much more interested in AI and plan to continue researching the topic.

Article URL: https://www.sciencedaily.com/releases/2018/10/181025142010.htm

From the blog CS@Worcester – My Life in Comp Sci by Tyler Rego and used with permission of the author. All other rights reserved by the author.

Blog 4 CS-343

Link To Article

In this article, the author talks about steps you can take to actually reduce software defects in programs before release. The first step seems obvious as a goal, but nonetheless it is to stop believing that you can’t put out a program without defects. Defects are nothing more than simple mistakes that could have been avoided but were not, and these bugs can range from trivial to system-breaking. So instead of believing that programs are bound to release with problems and defects, aim to write your programs without any defects at all. Another thing the author points out is that if you are given code to test, don’t simply point the finger at the developer and accuse them of being lazy. The people writing the code are not trying to skimp on their end to save time; they are often overworked and may simply have missed something. The article also explains that often the first thing done to reduce defects is to develop unit test coverage. However, even coverage that spans most of the program can be ineffective. The author also points out the concept of a “bug bash,” which in the end doesn’t seem like it would be very productive. It is the idea of a team taking the software, trying to break it, and then repairing it. While this may provide more knowledge of the software, this kind of practice seems to focus too much on gaining knowledge of the software instead of getting a product out to clients. Finally, the article suggests trying to avoid tendencies that lead to errors in the first place, such as missing a small statement when copying and pasting between files, or keeping files that are too complex, which leads to inattention errors.
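
To give a rough idea of what unit test coverage looks like in practice, here is a minimal sketch of Jest-style unit tests in TypeScript; the applyDiscount function is a hypothetical example, not something from the article:

```typescript
// A hypothetical function and unit tests covering its behavior,
// including the edge case where the input is out of range.
export function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new Error("invalid percent");
  return price - (price * percent) / 100;
}

test("applies a normal discount", () => {
  expect(applyDiscount(100, 25)).toBe(75);
});

test("rejects an out-of-range discount", () => {
  expect(() => applyDiscount(100, 150)).toThrow("invalid percent");
});
```

Even full coverage of a small function like this can still be ineffective, as the author warns, if the tests only ever exercise the happy path.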

From the blog CS@Worcester – James' Blog by jdenesha and used with permission of the author. All other rights reserved by the author.

Blog 4 CS-443

Link to Article

This article is about a software tester who is trying to help those new to the field avoid some mistakes that he has experienced in his career. Some of the mistakes and pitfalls the author highlights are things like running out of test ideas for a project, missing simple little mistakes, self-doubt about issues in the program, and the priority of what to test in a program. For each of these mistakes that an inexperienced tester can run into, the author gives a few examples of what someone can do in such a situation. One thing the author mentions, for when you may be lost about the testing goal, is to ask plenty of relevant questions to help you understand. It is better to admit you’re not clear about something in the project and ask for clarification than to be ignorant of the actual goal. I agree with the author as well on his point about a tester’s self-doubt over a bug they think they have found but are uncertain about. In this situation, the author suggests a practice I personally use as well: take a break from the project and come back with a “fresh set of eyes.” This break from testing gives you time to clear your mind and come back to the project with a new focus. One thing I learned from this author was the concept of BCA (Brute Cause Analysis), in which two people work together, with one person brainstorming possible bugs and the other thinking about the different ways those bugs could manifest. This can be a very good idea because it is always helpful to be able to see a problem (or bug) from another person’s perspective. One final thing the author suggests is to trust your gut when you think you may have found a bug but are not sure. In the worst case you simply find out that it was something you did wrong; in the better case, your gut could have paid off.

From the blog CS@Worcester – James' Blog by jdenesha and used with permission of the author. All other rights reserved by the author.

Some simple usability test methods

https://www.infoworld.com/article/3290253/application-development/6-usability-testing-methods-that-will-improve-your-software.html

A/B Testing

A/B testing is a type of testing where two different application designs, generally websites, are tested over time. Data is collected on their performance with some goal in mind, such as product sales. If analytics show that one of the designs was better at achieving that goal, it is declared superior and chosen as the design to go with. This can lead to even more A/B testing against other designs until the development team comes to a decision. The analytics for the test are often gathered through third-party tools rather than an in-house solution. An A/B test can be as specific as you want, to the point where you change only a single small element between the two designs, or the designs can be made completely different. It is often best to define a specific problem keeping you from your goal that you want to investigate, such as users failing to complete a transaction. A/B testing is extremely clear-cut at providing measurement data for design decisions, but it can take a large amount of time to conduct the tests and produce the data.
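
To sketch the mechanics (just the variant assignment and goal tracking, not the third-party analytics tools mentioned above), here is roughly what the skeleton of an A/B test might look like in TypeScript; all of the names are hypothetical:

```typescript
// Track visitors and goal completions (e.g. product sales) per design.
type Variant = "A" | "B";

interface TestResult {
  visitors: number;
  conversions: number;
}

const results: Record<Variant, TestResult> = {
  A: { visitors: 0, conversions: 0 },
  B: { visitors: 0, conversions: 0 },
};

// Randomly assign each new visitor to one of the two designs.
function assignVariant(): Variant {
  const variant: Variant = Math.random() < 0.5 ? "A" : "B";
  results[variant].visitors++;
  return variant;
}

// Record when a visitor reaches the goal, e.g. completes a transaction.
function recordConversion(variant: Variant): void {
  results[variant].conversions++;
}

// Compare the two designs once enough data has been collected.
function conversionRate(variant: Variant): number {
  const { visitors, conversions } = results[variant];
  return visitors === 0 ? 0 : conversions / visitors;
}
```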

Design Prototype Testing

Design prototype testing can be used to test a complete workflow in a wireframe or a fully designed portion of a product before it goes into development. A UX/UI designer will create the prototype, and the test will help fix usability issues before the project goes any further. First, it is important to define a budget for the project as well as the specific goal. Second, you need to choose a prototyping tool such as Axure. Third, you will need to choose a measuring tool, such as Loop11, to gather analytics from the user. It is important that the development team is familiar with such tools to make the test worth the time and work invested in it.

Formative Usability Testing

Formative usability testing is a type of early-stage testing that focuses more on quality assurance. This test should occur before the first release of the developed product so that it can become the baseline for future tests. With formative usability testing, the product goes through a beta test in which groups perform the defined usability tests. Test cases are usually written down to guide the participants through specific goals in order to get meaningful results. Afterwards, it is important to analyze the feedback and make revisions to the product before the official launch. This can be repeated in order to improve the product over time.

From the blog CS-443 – Timothy Montague Blog by Timothy Montague and used with permission of the author. All other rights reserved by the author.

Measuring Similarity

In developing a project for my data mining class, I kept asking myself how I could objectively measure the similarity between two objects. These objects happened to be Magic: The Gathering trading cards, and I was attempting to build a card recommending system not unlike those used by YouTube or Netflix to recommend videos, TV shows, and movies. Essentially I wanted the user to be able to pick any card that exists, and have the system respond with a short list of the most similar cards. I knew the hard part was going to be determining what makes cards “similar.” The most important attributes on the cards are entirely text, and considering that there have been almost 20,000 unique cards printed since 1993, I didn’t even know where to start. All I had was a CSV file of every card ever printed with each card’s text attributes such as card name, card type (creature, sorcery, etc), and the actual effects and rules of the card.

This is where machine learning engineer Christian S. Perone’s blog Terra Incognita comes in to save the day. He explains the use of tf-idf to convert a textual representation of data (like my Magic cards) into a vector space model: an objective, algebraic model of a card’s attributes. The tf-idf of a term in a document (trading card) is the product of two statistics, the term frequency (tf) and the inverse document frequency (idf). The term frequency measures how often a term appears on a specific card, while the inverse document frequency essentially measures how much information the word carries about that card: if a word is used frequently on a card but also appears on most other cards, it doesn’t provide us with much information. Now that we know about this tf-idf statistic, we can construct a matrix that contains a tf-idf measure (i.e., how important a word is) for every word on every card.
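
To see how such a matrix might be built, here is a toy TypeScript sketch of my own (not Perone's code); the three card texts are placeholders for the real CSV data:

```typescript
// Build tf-idf vectors for a tiny set of "cards" (rules text only).
const cards: string[] = [
  "flying creature draw a card",
  "destroy target creature",
  "draw a card then discard a card",
];

const tokenize = (text: string): string[] => text.toLowerCase().split(/\s+/);

// idf(term) = log(total cards / cards containing the term):
// rarer terms score higher and carry more information.
function idf(term: string, docs: string[][]): number {
  const containing = docs.filter((doc) => doc.includes(term)).length;
  return Math.log(docs.length / containing);
}

// One tf-idf entry per vocabulary word: term frequency times idf.
function tfidfVector(doc: string[], docs: string[][], vocab: string[]): number[] {
  return vocab.map((term) => {
    const tf = doc.filter((word) => word === term).length / doc.length;
    return tf * idf(term, docs);
  });
}

const docs = cards.map(tokenize);
const vocab = [...new Set(docs.flat())];
const matrix = docs.map((doc) => tfidfVector(doc, docs, vocab)); // one row per card
```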

Now that we have a numerical measure of importance for every word on every card, we need to find cards with tf-idf vectors similar to that of the card selected by the user. Perone explains the popular cosine similarity method, which takes two vectors and returns a real number between 0 and 1 (for non-negative vectors like these). We just have to compute the cosine similarity between our chosen card and every other card ever printed and return the most similar cards.
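
Continuing the sketch above (again my own illustration, not Perone's implementation), the cosine similarity of two vectors is their dot product divided by the product of their magnitudes:

```typescript
// cos(a, b) = (a · b) / (|a| * |b|); between 0 and 1 for tf-idf vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denominator = Math.sqrt(normA) * Math.sqrt(normB);
  return denominator === 0 ? 0 : dot / denominator;
}

// Rank all other cards by similarity to the chosen card's vector.
function mostSimilar(chosen: number[], all: number[][]): number[] {
  return all
    .map((vector, index) => ({ index, score: cosineSimilarity(chosen, vector) }))
    .sort((x, y) => y.score - x.score)
    .map((entry) => entry.index);
}
```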

Perone very clearly explains these techniques in both plain English and with precise mathematical notation. The entire process has taught me an enormous amount about processing textually represented data, and mathematically following human intuition about what makes two things “similar.” I know that the skills learned through this project are going to be invaluable to me, and they’ve certainly changed the way I think about textual data.

From the blog CS@Worcester – Adventures in Computer Science by zachstevens2808 and used with permission of the author. All other rights reserved by the author.

Somebody Touched My Spaghetti!

For this week’s blog post I will be looking at an antipattern known as Spaghetti Code, from the Source Making site. Perhaps the most famous AntiPattern, it has existed in one form or another since the advent of programming languages. Essentially, Spaghetti Code is a very cluttered or messy design approach that causes the code to appear almost like spaghetti: all tangled up.

Non-object-oriented languages appear to be more susceptible to this, and it is more likely to happen to those who have yet to fully master the advanced concepts of object orientation. The general form of spaghetti code appears in systems with very little software structure: “If developed using an object-oriented language the software may include a small number of objects that contain methods with very large implementations that invoke a single, multistage process flow.” On top of this, object methods are invoked in a very predictable manner, with a negligible degree of dynamic interaction between any of the objects in the system. This makes the system very difficult to maintain or extend and leaves no opportunity to reuse the objects and modules in other, similar systems. Spaghetti code usually results from inexperience with object-oriented design technologies and, similarly, from having no design prior to the implementation of the actual code. Another cause is developers working in isolation, which can lead to ineffective code reviews.

A solution to this not-so-delicious mess is software refactoring (code cleanup). It is an essential part of software development and allows for the most efficient cleanup. When the structure becomes “compromised” by the mess, its support for extension becomes more and more limited, to the point of uselessness. Ideally, code cleanup should happen throughout the entire development process, but that’s an ideal that not everyone (including myself) follows all the time. Doing so on an hourly or daily basis is a good start to the cleanup process.
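
To make the idea concrete, here is a minimal before-and-after sketch in TypeScript; the order-processing example is hypothetical, not from the Source Making article:

```typescript
interface Order {
  items: number[]; // item prices
  taxRate: number;
}

// Before: one tangled method mixing validation, math, and formatting.
function processOrderSpaghetti(order: Order): string {
  if (order.items.length === 0) { throw new Error("empty order"); }
  let t = 0;
  for (const p of order.items) { t += p; }
  t = t + t * order.taxRate;
  return "Total: $" + t.toFixed(2);
}

// After: each stage is its own small, named, reusable function.
function validate(order: Order): void {
  if (order.items.length === 0) throw new Error("empty order");
}

function subtotal(order: Order): number {
  return order.items.reduce((sum, price) => sum + price, 0);
}

function withTax(amount: number, rate: number): number {
  return amount + amount * rate;
}

function processOrder(order: Order): string {
  validate(order);
  return `Total: $${withTax(subtotal(order), order.taxRate).toFixed(2)}`;
}
```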

If simple code cleanup is not working, what next? Stopping spaghetti code through prevention is usually the best way to resolve the matter. Before you start writing the code, have a plan for what you are designing and how to structure it. Committing to actively refactoring and improving spaghetti code whenever the code needs to be modified is an extremely useful way to prevent it.

Essentially, if you don’t want to have spaghetti-and-meatball code, you need to think about the overall structure and have a good idea of what you are going to be developing.

https://sourcemaking.com/antipatterns/spaghetti-code

From the blog CS@Worcester – Matt's Blog by mattyd99 and used with permission of the author. All other rights reserved by the author.

The Customer Wants What The Customer Wants

Hello, again my dear readers!

It appears that this week I am very focused on the customer, as the article I read, titled “Figuring Out What They Expected,” focuses on what the customer wants. When it says “they,” the article means the customer: the person you are writing the program or application for. Anyway, let us get into the meat of this article.

The article starts out by defining two things. The first is the user model: effectively, what the user is expecting and thinking when they use the program. It bundles everything they know about computers and all their preconceived notions about using them when they sit down and use your program. How do I use this program, and what does it do for me, the user? This is the model you are aiming to nail. If no one uses a program, does the program really exist? The answer is yes, but we aren’t here to talk about that. The second model is the program model. This is what the programmer built into the program: how it looks, works, and operates. The idea in establishing these is that the user model and the program model should overlap, ideally mirroring each other. Now there are two ways to do this. The first is to change the user model. Good luck with that one. People are stubborn and stuck in their ways, and how would you even accomplish that anyway? Write a manual on how to use your program? We all know no one reads manuals anymore (although, to be honest, people really should), and if your program is different from what the user is used to, the user is likely to just not use your program. There is almost always another way. This leaves changing the program model to match the user model. I mean, let us face it, it might suck, but you can change your program to match what the user expects. It may be a pain, but if it means your program is used more and ultimately bought more, I think it is worth biting the bullet.

The next part of the article goes over how to actually find that user model. The article has a simple and elegant solution… ask the users. Then, after you implement your design, grab a few people and ask them to test it. Not a large group of people, either: only about 5 or 6 are required; after that, any further tests are fairly repetitive and not that useful. In the end, if the user has to guess how the program works, the program model is not quite there yet.

This article has reinforced my view that in this industry, the end user or customer is the ultimate judge of a program or application. After all, we are programming an application for someone to use. If they can’t use it, it’s no good to them. I will admit, I’m surprised that only 5 or 6 people is the norm for usability testing. I do have a new appreciation for the Apple way of thinking, where the simplest way to do something is the way to do it.

Until next week readers!

From the blog CS@Worcester – Computer Science Discovery at WSU by mesitecsblog and used with permission of the author. All other rights reserved by the author.

Test Case VS Test Scenario

For this week’s blog, I chose to read the Test Scenario VS Test Case article from the softwaretestinghelp website.

Test Case – a concept that provides detailed information on what to test, the steps to be taken, and what the expected result should be. It is more about documenting details. This is important when testing happens somewhere other than with the development team, since it makes it easier to get the devs and the QA team in sync. All the test cases are documented once and can be easily tracked in the future. Test cases are also helpful when reporting bugs: testers can reference the case IDs and do not need to mention every detail of the case. They are also helpful to new testers, since all the tests are already laid out. But this approach consumes time and money, as it requires more resources to detail everything.

Test Scenarios – a concept that provides one line of information about what to test. It is more about thinking and discussion rather than listing everything. It is more useful when you have a time constraint and most members understand what is happening. It can save time and makes everybody think about what to test; good test coverage can be achieved, and it reduces repeatability. But if the scenarios are created by a specific reviewer or other users, the team might not be in sync, which causes confusion. This type of testing also requires more discussion and team effort.
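
To make the contrast concrete, here is a minimal sketch of how the same feature might be captured each way; the login example and its fields are hypothetical, not from the article:

```typescript
// A test scenario is a single line: what to test.
const scenario = "Verify that a registered user can log in";

// A test case documents the details: steps, data, and expected result.
const testCase = {
  id: "TC-101",
  scenario,
  steps: [
    "Navigate to the login page",
    "Enter a valid username and password",
    "Click the Login button",
  ],
  testData: { username: "jdoe", password: "correct-horse" },
  expectedResult: "User is redirected to the dashboard",
};
```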

I think this is a great read, as it pits the standard, older way of testing against the approach preferred by the new generation of the software testing community. The test case is the standard way of testing systems, while the test scenario is newer and offers easier documentation (assuming everybody understands what the system does). Although it may seem less beneficial, since most companies change employees here and there, I can see test scenarios saving a lot of time on documentation alone. Learning about them opens up a lot of possibilities in the way we think about testing; then again, there is already automated testing, so test scenarios might not offer as much in the future.

From the blog CS@Worcester – Computer Science by csrenz and used with permission of the author. All other rights reserved by the author.

Dynamic Programming

Summary

In the article Exploring Dynamic Programming, Ross Rhodes goes over three examples of dynamic programming in increasing order of difficulty: the nth Fibonacci number, traversing a matrix, and matrix chain multiplication. These are problems with straightforward but very inefficient approaches that can instead be solved via dynamic programming techniques such as memoization, an optimization technique that stores the results of expensive function calls and returns the cached result when the same inputs occur again. For example, when calculating the nth Fibonacci number for multiple different values of n, rather than performing the same calculations again, you can store already-calculated values.
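
As a small illustration of memoization (my own TypeScript sketch, not Rhodes's code), here is the Fibonacci example with and without a cache:

```typescript
// Naive version: exponential time, because the same subproblems
// (fib(n - 2), fib(n - 3), ...) are recomputed over and over.
function fibNaive(n: number): number {
  return n < 2 ? n : fibNaive(n - 1) + fibNaive(n - 2);
}

// Memoized version: each value is computed once and cached, so both
// the recursion and repeated calls for different n run in linear time.
const cache = new Map<number, number>();

function fib(n: number): number {
  if (n < 2) return n;
  const cached = cache.get(n);
  if (cached !== undefined) return cached;
  const result = fib(n - 1) + fib(n - 2);
  cache.set(n, result);
  return result;
}
```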

Although the three examples provided are each examples of dynamic programming, they each have moderately different approaches to solving their respective problems. As Rhodes says at the end of the blog post, these examples only scrape the surface of what dynamic programming can be used for.

Reaction to Content

I chose this topic for this week’s blog post because it was something I hadn’t been exposed to significantly. While I’ve known of the technique and its applications, I hadn’t used it for anything beyond an application similar to the Fibonacci example provided. The other two examples are notably more complicated and helped provide more insight into the situations dynamic programming can be used to solve.

Overall, while I think this article was useful for understanding dynamic programming, I think the best way to understand it is to solve problems using these techniques and to come up with your own solutions for them. That way you can really internalize these concepts and you can spot when you’ve run into a problem in which dynamic programming could be used. Just reading through these examples alone and trying to follow through the thought process won’t necessarily be enough when you have to solve a unique problem on your own.

I think this topic is definitely something that should be understood, as even if you somehow never ran into a real-world situation that dynamic programming would be useful for, understanding it will only make you a better programmer. And if nothing else, it’s likely to come up at some point in an interview.

Source: https://blog.scottlogic.com/2018/01/30/exploring-dynamic-programming.html

From the blog CS@Worcester – Andy Pham by apham1 and used with permission of the author. All other rights reserved by the author.

TypeScript and Object-oriented Programming Fundamentals


Anders Hejlsberg is known as the creator of TypeScript. In 2010, he and his team began developing TypeScript, and in 2012 they released TypeScript 0.8 for the first time. Several versions of TypeScript have been released since 2012; the latest announced is TypeScript 3.2 RC, the release candidate of the next version. According to Anders Hejlsberg, JavaScript is TypeScript, but TypeScript is not JavaScript, which makes TypeScript a superset of the JavaScript language. That means that any valid JavaScript code is also valid TypeScript code.

TypeScript has additional features that do not exist in the current version of JavaScript supported by most browsers out there. In TypeScript, we have the concept of strong, or static, typing. If you have worked with languages like C# or Java, you know that when we define a variable in these languages, we need to specify its type. In TypeScript, typing is optional, so we don’t have to use this feature; but using it makes our applications more predictable and also makes them easier to debug when something goes wrong. TypeScript also brings object-oriented features that we have missed in JavaScript for a long time: classes, interfaces, constructors, access modifiers like public and private, fields, properties, generics, and so on. Another benefit of using TypeScript is that we can catch errors at compile time instead of at runtime; of course not all errors, but a lot of them. There is a compilation step involved, and when we compile our TypeScript code, we can catch these errors and fix them before deploying our application. And finally, another benefit of using TypeScript is that we get access to some great tools out there. One thing that I personally love about TypeScript is the autocomplete suggestions we get in code editors such as Atom and Visual Studio Code as we are coding.
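
As a quick sketch of several of these features in one place (my own example with hypothetical names, not taken from the references):

```typescript
// Static typing: assigning a number here fails at compile time.
let message: string = "hello";

// Interfaces describe the shape of an object.
interface Person {
  name: string;
  age: number;
}

// Classes with a constructor and access modifiers.
class Employee implements Person {
  constructor(
    public name: string,
    public age: number,
    private salary: number // not accessible outside the class
  ) {}

  raise(amount: number): void {
    this.salary += amount;
  }
}

// Generics: a type-safe function that works for any element type.
function firstElement<T>(items: T[]): T | undefined {
  return items[0];
}

const employee = new Employee("Ada", 36, 90000);
const first = firstElement([1, 2, 3]); // inferred as number | undefined
```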

TypeScript is a beautiful language, but the browsers we use every day don’t know TypeScript, and it is very unlikely that browsers will support it in the future. So we need to compile, or more accurately transpile, our TypeScript code into JavaScript code. This is part of building an application: whenever we build, the TypeScript compiler kicks in and transpiles the TypeScript code into JavaScript code that browsers can understand.
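
As a minimal sketch of that build step (assuming the compiler has been installed through npm, and using a hypothetical greeter.ts file):

```
# Install the TypeScript compiler, then transpile a file to JavaScript.
npm install -g typescript
tsc greeter.ts                 # emits greeter.js
tsc --target ES5 greeter.ts    # emits ES5 output older browsers understand
```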


References:
https://en.wikipedia.org/wiki/TypeScript
https://channel9.msdn.com/posts/Anders-Hejlsberg-Introducing-TypeScript

From the blog CS@Worcester – Gloris's Blog by Gloris Pina and used with permission of the author. All other rights reserved by the author.