Category Archives: Computer Science

4 Warning Signs That Your Beta Testing Process Is in Danger

Original Article

In this article, Ran Rachlin goes over four warning signs that your current beta testing process may be in trouble.

The first sign he talks about is your beta testing having huge delays. The main way Rachlin says to overcome this obstacle is to set strict deadlines for the team (this includes testers). He also points out that the objectives for each deadline should be clear and reasonable; this makes sure that everyone is on the same page and no one gets worried about what needs to be done. The other thing this step includes is keeping constant contact with the testers so that communication is not lost and you receive better feedback.

The second item Rachlin touches upon is making sure that you don't underestimate how much time it will take to go through testing. If you are on a tight deadline to get your product out, he recommends a few things. One is to make sure you have experienced testers for your product. Two is to stay in contact with the testers (as mentioned above) and inform them of the "time crunch" they are dealing with. These steps help make sure that the testers know what they are doing beforehand and are aware of the challenges they need to overcome.

The third thing is to be aware of the well-being of the testers you've hired. If your testers become frustrated with the product they are working on, this can hinder the speed of testing the application. Again, it helps to keep contact with the testers to make sure they are on track with the work, but you also want to make sure that they feel appreciated for the work they are doing, especially if they are in a time crunch. It may also help to give incentives near the end of a deadline to encourage testers to finish early or to reward a job well done. You should always have a backup plan, though, just in case things become too much for the testers you have.

The last point that Rachlin makes is to ask yourself, "Are we testing the target market and devices?" He says that you must make sure of two things: that the testers you have are from the target market, and that you are using the "most popular devices and carriers in this target market" for your tests. If all of these issues are addressed, you should have minimal issues with the beta testing process for your app.

From the blog CS WSU – Techni-Cat by clamberthutchinson and used with permission of the author. All other rights reserved by the author.

Best Operating System for Developers?

This is a frequent question among newbies to the tech industry.

Obviously it is going to depend on what kind of development you are doing. If you are making a Linux application, you'll need a Linux OS; if you're making a Windows application, you'll need a Windows OS; if you're making a Mac application, you'll need a Mac OS.

But then the question becomes, what is the best OS overall?

I'll try to compare the different OSes using a benchmark. Let's get to it.

In my personal opinion, if you are into web development, Windows is probably the best to go with, as you can check all the reasonably common web browsers there, which you can't on Linux. (A side note: macOS is Unix-based, not a version of Linux as is sometimes claimed.)

For Linux, it's good to learn the command-line interface because it is very powerful without all the GUI. People underestimate how much a GUI lowers productivity with its higher response times. But that alone is not a good reason to switch to Linux, as Windows has a command line too, plus the more advanced PowerShell, which is on par with Bash.

A Mac is the best choice if you are into iOS development, since you can write your code in Objective-C, which targets Apple devices. (Objective-C is not used for Android; Android apps are typically written in Java or Kotlin.) I recently heard in a presentation that Java is a battery drainer for mobile applications, so some people prefer to use a Mac.

Performance even differs among the various distributions and Unix-like systems. For example, Solaris, which is a UNIX rather than a Linux distribution, runs Java very well compared to other systems. That is probably due to the fact that Solaris was made by Sun Microsystems, the creators of Java, which is now owned by Oracle.


Diagram taken from: http://www.phoronix.com/scan.php?page=article&item=intel_atom_os&num=1

But in the real world it all comes down to economics. You can make a lot of money selling software for Windows, while much Linux software is given away as open source, so there's hardly any monetary gain in making Linux applications. Where Linux does dominate in the real world is in large server farms, where companies want to keep costs down and avoid paying for licenses.

Works cited.

https://www.quora.com/Which-is-better-for-programmers-in-general-Windows-Linux-or-Mac

From the blog CS@Worcester – thewisedevloper by thewisedeveloper and used with permission of the author. All other rights reserved by the author.

The Art of the Deal: Negotiating your First Job

If you are a college student with basic credentials applying for a job or summer internship, you're unfortunately at a disadvantage when it comes to negotiation, because all you care about is getting a job, any job, that pays a little bit more! Then there are the students that fall into the other category, the 'exceptional' category. Joel Spolsky, Stack Overflow CEO, warns on his blog joelonsoftware.com about a very common tactic used by 'third-grade recruiters and companies' called the "exploding offer." Here's what happens. They call you for an interview, usually an on-campus one, maybe at their HQ for a second round. You will probably ace it, since you are one of the 'exceptional' ones. They make you an offer and pressure you to accept it by making you decide before finishing all of your other interviews. Career counselors know this, and almost every campus prohibits such tactics by recruiters. But the recruiters don't seem to care. Here's Joel's explanation: "They know that they're a second-rate company: good enough, but nobody's dream job, and they know that they can't get first-rate students unless they use pressure tactics like exploding offers."

But now let’s look at how to do the “negotiation” properly.

  1. First, you need to schedule all your interviews close to one-another.
  2. If you happen to get an "exploding offer," here's how to push back: say "I'm sorry, I'm not going to be able to give you an answer until January 14th. I hope that's OK." Then, if they push, you can firm up your tone a little.
  3. In the rare case that they don't accept, accept the offer, but go to the other interviews anyway. Don't sign anything. If you get a better offer, call them back and tell them that you changed your mind.
  4. Campus recruiters count on students' high ethical standards. People feel bad when they make a promise to somebody and break it, and students are no different. That is commendable behavior, making good on a promise. But unethical recruiters don't deserve ethical decision making.

Joel Spolsky's blog was interesting, I think, since it shows an aspect of your first job that people don't think about. People are just happy to get a job right out of college and to be making some more $$ bucks $$ than they currently do. And that's alright, I guess.

Citations.

https://www.joelonsoftware.com/2008/11/26/exploding-offer-season/

 

From the blog CS@Worcester – thewisedevloper by thewisedeveloper and used with permission of the author. All other rights reserved by the author.

Why every developer should learn R, Python, and Hadoop.

Recently I used R for my course project on data mining. The course didn't require that we use R or Python; instead, it was taught using WEKA. But here's why I think it should be taught with R or Python in future years.

R is a heavy-duty language – R is a powerful scripting language that will help you handle large, complex data sets. I was struggling to run WEKA with a dataset of no more than 5 million records. Since part of data mining involves creating visualizations to better understand the relations between attributes, R seemed to be the natural best fit for a course on data mining, not WEKA. WEKA kept crashing, and the algorithms run comparatively faster in R and Python. This is partly because R can be used on high-performance computing clusters, which can handle the processing of a huge number of processes. The other thing I liked most was the visualization tooling R is equipped with; the graphs and plots in R are vivid and eye-catching.

Python is user-friendly – Python, similar to Java, C, and Perl, is one of the easier languages to grasp. Finding and squashing bugs is easier in Python because it is a scripting language. Moreover, Python is an object-oriented language, and it is a performer like R. The other good thing is that if you are planning to do some fun projects with something called the Raspberry Pi, then Python is the language to learn.

Hadoop – Hadoop is well suited for huge data. Remember the issue I had with WEKA due to the size of my dataset? That problem can be eliminated by using Hadoop. Hadoop splits the dataset into blocks spread across a cluster of machines, performs the analysis on those blocks in parallel, and combines the results. Top companies like Dell, Amazon, and IBM that own terabytes of data have little choice but to use tools like Hadoop.
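
This split, analyze, and combine idea can be sketched in plain Java. To be clear, this is not the Hadoop API (a real job would use Hadoop's Mapper and Reducer classes running across machines); it just illustrates the concept on a single machine:

```java
import java.util.*;
import java.util.stream.*;

public class SplitCombineDemo {

    // Count word frequencies by splitting the data into chunks,
    // counting each chunk independently (in parallel), then merging
    // the partial counts: the same shape as a MapReduce job.
    public static Map<String, Long> wordCount(List<String> words, int numChunks) {
        int chunkSize = (int) Math.ceil((double) words.size() / numChunks);
        List<List<String>> partitions = new ArrayList<>();
        for (int i = 0; i < words.size(); i += chunkSize) {
            partitions.add(words.subList(i, Math.min(i + chunkSize, words.size())));
        }

        // "Map" phase: each partition is counted independently.
        List<Map<String, Long>> partials = partitions.parallelStream()
                .map(part -> part.stream()
                        .collect(Collectors.groupingBy(w -> w, Collectors.counting())))
                .collect(Collectors.toList());

        // "Reduce" phase: merge the partial counts into one result.
        Map<String, Long> total = new HashMap<>();
        for (Map<String, Long> partial : partials) {
            partial.forEach((word, count) -> total.merge(word, count, Long::sum));
        }
        return total;
    }

    public static void main(String[] args) {
        List<String> data = Arrays.asList("pizza", "data", "pizza", "hadoop", "data", "pizza");
        System.out.println(wordCount(data, 3)); // pizza=3, data=2, hadoop=1 (map order may vary)
    }
}
```

On a real cluster the partitions live on different machines, which is what lets Hadoop scale past the memory of any single computer.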

You need to learn these three tools at a minimum in order to be a good data scientist and to do a good, thorough analysis of a given dataset.

 

From the blog CS@Worcester – thewisedevloper by thewisedeveloper and used with permission of the author. All other rights reserved by the author.

When to Mock?

Full Post by Uncle Bob

In this blog post, Uncle Bob goes over the basics when you should use mocks, and the advantages and disadvantages of using them.

He first goes over what will happen to your test suite if you have no mocks (he's also going under the assumption that this is for a large application). The first issue that arises when no mocks are used is that, during execution, your tests could take from several minutes to several hours to run. The next problem is that your test suite might not cover everything. Bob states that "error conditions and exceptions are nearly impossible to test without mocks that can simulate those errors." The example he gives is that if you have files or database tables that need to be deleted, this task may be too dangerous without the use of mocks. The last thing Uncle Bob brings up is that tests are really fragile if you don't use mocks, because tests can easily be disrupted by faults in the system that are not even related to what you're testing.

This may make it seem that mocks are always the way to go, but Uncle Bob also covers what happens when you use too many mocks (here he assumes you have mocks between all classes in your test suite). The first issue is that "some mocking systems depend strongly on reflection," and if you have too many mocks this can slow down your testing phase. The other thing is that, if you use mocks between all classes, you may end up making mocks that create other mocks. This creates two problems. One, your set-up code can end up getting convoluted. Two, your "mocking structure [can] become tightly coupled to implementation details causing many tests to break when those details are modified." The last thing Bob points out is that if you have too many mocks, you may end up having to create extra interfaces "whose sole purpose is to allow mocking," and this can lead to "over-abstraction and the dreaded 'design damage'."

This may make it seem that both using and not using mocks lead to issues, so what is the correct way to use mocks in your test suite? Uncle Bob's method is to only "mock across architecturally significant boundaries, but not within those boundaries"; that is, only mock when it comes to external software or anything rather large, like a database or web server. This way of mocking addresses most, if not all, of the issues that come up when you have no mocks or too many. The other thing Uncle Bob suggests is to always write your own mocks rather than relying on mocking tools. He points out that most mocking tools have their own language that can become time-consuming to learn. Also, creating your own mocks may be more beneficial in the end, because it forces you to name your mocks and put them in directories, which allows you to reuse them in other tests, and it makes you take the time to design your own mocking structure.
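
As a sketch of what a hand-rolled mock across a boundary might look like: the names below (PaymentGateway, OrderService) are hypothetical, not from Bob's post, but they show a mock simulating a failure that would be risky to trigger for real:

```java
// A boundary interface for an external service.
// PaymentGateway and its methods are hypothetical names for illustration.
interface PaymentGateway {
    boolean charge(String account, int cents);
}

// A hand-rolled mock: it records calls and returns a scripted result,
// including failure cases that would be dangerous to trigger for real.
class MockPaymentGateway implements PaymentGateway {
    boolean shouldSucceed;
    int chargeCalls = 0;

    MockPaymentGateway(boolean shouldSucceed) { this.shouldSucceed = shouldSucceed; }

    public boolean charge(String account, int cents) {
        chargeCalls++;
        return shouldSucceed;
    }
}

// The code under test depends only on the interface, so the mock
// can be swapped in without touching production code.
class OrderService {
    private final PaymentGateway gateway;
    OrderService(PaymentGateway gateway) { this.gateway = gateway; }

    String placeOrder(String account, int cents) {
        return gateway.charge(account, cents) ? "confirmed" : "payment failed";
    }
}

public class MockDemo {
    public static void main(String[] args) {
        MockPaymentGateway mock = new MockPaymentGateway(false); // simulate a failure
        OrderService service = new OrderService(mock);
        System.out.println(service.placeOrder("acct-1", 500)); // payment failed
        System.out.println(mock.chargeCalls); // 1
    }
}
```

Because the mock is a plain named class, it can live in a test directory and be reused by any test that crosses this boundary.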

In the end, Bob suggests using mocking as little as possible and to strive to design tests to not use mocks.

From the blog CS WSU – Techni-Cat by clamberthutchinson and used with permission of the author. All other rights reserved by the author.

Exploring the Factory Method Pattern

The standard way of creating an object is to instantiate a concrete class directly with the new operator:

SomeClass sc = new SomeClass();

One of the drawbacks of using the new operator to create an object is the need to specify the type of object to create. Specifically, it creates a dependency between your code and the class or type of object created. Sometimes a more general solution is needed, one that allows control over when an instance of an object is created but leaves open or delegates to another class the specific type of object to create.

Decoupling object creation from object use results in code that is more flexible and extendable.

The advice "program to an interface, not an implementation" also applies to object creation. You could create an object with the new operator:

class ClientCodeC {
    public ClientCodeC() {
        SupportingClass sc = new SupportingClass();
    }
}

However, the reference to the concrete class SupportingClass at line 3 is an example of programming to an implementation. If there is nothing about ClientCodeC that requires an instance of SupportingClass specifically, it is usually better to program to an abstract interface, one that will result in the creation of an instance of SupportingClass or another type that is compatible with the client code.

But how do we do that? That is precisely the problem solved by the Factory Method design pattern.

The Factory Method design pattern has four main components. The class Creator declares a factory method createProduct(), which returns an object of type Product. Subclasses of Creator override the factory method to create and return a concrete instance of Product.
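
Sketched in Java (the Creator and Product names come from the description above; the concrete classes are illustrative):

```java
// Product: the type of object the factory method returns.
interface Product {
    String name();
}

class ConcreteProduct implements Product {
    public String name() { return "ConcreteProduct"; }
}

// Creator declares the factory method createProduct().
abstract class Creator {
    public abstract Product createProduct();

    // Shared logic can use the product without knowing its concrete type.
    public String describeProduct() {
        return "Created a " + createProduct().name();
    }
}

// Subclasses override the factory method to choose the concrete type.
class ConcreteCreator extends Creator {
    public Product createProduct() { return new ConcreteProduct(); }
}
```

Calling new ConcreteCreator().describeProduct() returns "Created a ConcreteProduct", even though Creator itself never names that class.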

[Structure diagram of the Factory Method pattern]

There are many forms. Here’s one.

[Structure diagram: one concrete creator, many concrete products]

Here’s another.

[Structure diagram: Java collections and their iterators]

Notice how it differs from the structure diagram in the previous example. In the previous example there is a one-to-many relationship between concrete creators and concrete products. With iteration in Java there is a one-to-one relationship between concrete collection classes and concrete iterators. Both are valid forms of the pattern.

But what happens when we have multiple store franchises? Let's say that we have two stores: one wants to make NY style pizzas and the other wants to make Chicago style pizzas.

Now we're in a situation where we are going to need either a much more complex factory, or two different factories to make the two different styles of pizza, or we'll have to put all the logic for creating the different pizzas back into the store. All these options make for more complicated code, and many more places where we'll have to change code if we add new pizza types or change a topping.

The current implementation also violates the open/closed principle. There is no way to extend the current solution to work with other styles of pizza without modifying the code.

The stores need to use the same order pizza method, so that they prepare, bake, cut, and box the pizzas in the same way, but they need the flexibility of creating different styles of pizza objects, one creating NY style pizzas while the other creates Chicago style pizzas. What we want is two different kinds of pizza stores that use the same time-tested algorithm to make pizzas but that can make two different kinds of pizzas, without being dependent on the concrete types of pizzas in any way.

To create our store franchises, we’ll change the design to use the Factory Method Pattern.

The Factory Method Pattern defines an interface for creating an object, but lets subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.

Let’s see what that means by looking at the new design for the pizza store.

First we’re going to change the pizza store class so that it is an abstract class. All the pizza stores in the franchise will extend this class. To make sure that all the pizza stores use the same method for preparing pizzas for customers, in other words the same algorithm, the pizza store abstract class will implement the order pizza method. This is the shared code all the stores use for preparing the pizza, no matter what store we’re in and what type of pizza it is.

However, the two pizza stores that extend this class, the NY store and the Chicago store, will implement their own create pizza methods. That way each store gets to decide what kind of pizzas it will make: the NY store will make NY style pizzas while the Chicago store will make Chicago style pizzas. This is what is meant in the factory method pattern definition when we say the pattern lets a class defer instantiation to subclasses. Here the abstract pizza store class is deferring instantiation of concrete pizzas to the individual pizza stores that extend it.

The pizza class is basically going to stay the same, but now we're going to have many different types of pizzas: a set of pizzas for the NY style store and a set for the Chicago style store. The NY pizza store will be responsible for creating the NY style pizzas and the Chicago pizza store will be responsible for creating the Chicago style pizzas.

To create a pizza, we'll first instantiate the kind of store that we want. Imagine choosing between walking into a NY style pizza store or a Chicago style pizza store. Once we're in the store, we can order a pizza. Remember, this method is implemented by the abstract pizza store class. So, no matter which pizza store we're in when we make an order, we're guaranteed to get the same brilliant pizza-making algorithm producing quality pizzas.

The first step in the order pizza algorithm is to create a pizza. The create pizza method is implemented by the individual stores. So, if we're in a NY style pizza store, we'll get the method implemented by that store. We pass the type of pizza to the create pizza method, which creates the right type of pizza based on the type; once it is returned to the order pizza method, the store can prepare the pizza for the customer.
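
Here is a minimal sketch of the design just described. The class and method names follow the text, but the bodies are simplified placeholders rather than the book's actual code:

```java
import java.util.*;

abstract class Pizza {
    String name;
    List<String> steps = new ArrayList<>();
    void prepare() { steps.add("prepare"); }
    void bake()    { steps.add("bake"); }
    void cut()     { steps.add("cut"); }
    void box()     { steps.add("box"); }
}

class NYStyleCheesePizza extends Pizza {
    NYStyleCheesePizza() { name = "NY Style Cheese Pizza"; }
}

class ChicagoStyleCheesePizza extends Pizza {
    ChicagoStyleCheesePizza() { name = "Chicago Style Cheese Pizza"; }
}

// The abstract store implements the shared orderPizza() algorithm
// but defers pizza creation to its subclasses (the factory method).
abstract class PizzaStore {
    protected abstract Pizza createPizza(String type);

    public Pizza orderPizza(String type) {
        Pizza pizza = createPizza(type); // factory method call
        pizza.prepare();
        pizza.bake();
        pizza.cut();
        pizza.box();
        return pizza;
    }
}

class NYPizzaStore extends PizzaStore {
    protected Pizza createPizza(String type) {
        if (type.equals("cheese")) return new NYStyleCheesePizza();
        throw new IllegalArgumentException("Unknown type: " + type);
    }
}

class ChicagoPizzaStore extends PizzaStore {
    protected Pizza createPizza(String type) {
        if (type.equals("cheese")) return new ChicagoStyleCheesePizza();
        throw new IllegalArgumentException("Unknown type: " + type);
    }
}
```

With these classes, new NYPizzaStore().orderPizza("cheese") produces a NY style pizza prepared by the shared algorithm, while the Chicago store produces its own style from the very same orderPizza() code.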

Here’s our main class.

Recall that in the Simple Factory idiom, we first created a factory and then passed the factory to the store. We no longer need that separate factory because the pizza stores create the pizzas directly. Remember that the pizza store's order pizza method, implemented in the pizza store abstract class, creates the kind of pizza we want depending on which store we call the method on.

PizzaStore is implemented as a Factory Method because we want to be able to create a product that varies by region. With the Factory Method, each region gets its own concrete factory that knows how to make pizzas which are appropriate for the area.

Works Cited

Burris, Eddie. “Chapter 8 Factory Method.” Programming in the Large with Design Patterns. Leawood, Kan: Pretty Print, 2012. 110-122. Print.

Freeman, Eric, and Elisabeth Freeman. “Chapter 4. The Factory Method Pattern: Baking with OO Goodness.” Head First Design Patterns. Sebastopol, CA: O’Reilly Media, 2005. N. pag. Print

 

From the blog CS@Worcester – thewisedevloper by thewisedeveloper and used with permission of the author. All other rights reserved by the author.

Understanding the Simple Factory Idiom

The goal of the simple factory idiom is to separate out the process of creating concrete objects, reducing the client's dependency on the concrete implementations.

To implement the simple factory we need three things:

  1. A factory, the pizza factory.
  2. The products the factory makes which are the pizza objects.
  3. The client that uses the factory, which is the pizza store.

So, let's take our pizza creation code and separate it out, encapsulating it in a class called Factory.

[Screenshot: the pizza factory class]

Why a Factory? Because this is a class whose sole responsibility is creating pizzas — it’s a pizza factory.

To do that, we're going to take the conditional code for creating pizzas and put it into a separate class, in a method named createPizza. Each time we want a pizza, we'll call the method and pass it a type, and the method will make a pizza for us and return an object that implements the pizza interface.

[Screenshot: the createPizza method]

Now all this creation code is in a separate class, nicely separated from the restaurant code. So, let's integrate this with our client (restaurant) code. Let's assume that we've created a factory object already, and call it to create the pizza, passing it the type variable.

[Screenshot: the orderPizza method calling the factory]

Our order pizza method no longer has to worry about the concrete type of the pizza. It could be a veggie pizza, a cheese pizza, or a pizza we haven’t even heard of yet. We know whatever type gets returned by the factory, it implements the pizza interface. And that’s all we care about.
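
Putting those pieces together, a minimal sketch of the idiom might look like this (the concrete pizza types are illustrative):

```java
interface Pizza {
    String getName();
}

class CheesePizza implements Pizza {
    public String getName() { return "cheese"; }
}

class VeggiePizza implements Pizza {
    public String getName() { return "veggie"; }
}

// The factory's sole responsibility is creating pizzas; it is the
// only place the concrete pizza types are known.
class SimplePizzaFactory {
    public Pizza createPizza(String type) {
        if (type.equals("cheese")) return new CheesePizza();
        if (type.equals("veggie")) return new VeggiePizza();
        throw new IllegalArgumentException("Unknown pizza type: " + type);
    }
}

// The client (the pizza store) depends only on the factory and the
// Pizza interface, never on the concrete pizza classes.
class PizzaStore {
    private final SimplePizzaFactory factory;
    PizzaStore(SimplePizzaFactory factory) { this.factory = factory; }

    public Pizza orderPizza(String type) {
        Pizza pizza = factory.createPizza(type);
        // prepare, bake, cut, box...
        return pizza;
    }
}
```

Adding a new pizza type now means touching only the factory; the store's code never changes.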

So we call this object design the simple factory idiom.

[Diagram: the simple factory idiom with pizza store, factory, and pizzas]

We start with the client, the pizza store. Then we have our factory; the factory is the only place the concrete types of pizzas are known. And then we have the products the factory makes, the pizzas, and there could be many concrete types of those.

To generalize this a bit, we could look at the same diagram without pizzas.

[Diagram: client, factory, and products implementing a common interface]

And here we have the client, a factory, and a set of products that implement a common interface.

Now, one thing to note: this is not a full-fledged official pattern. It is more of an idiom that's commonly used. That said, it is the first step to understanding some of the more common design patterns.

And now we'll put everything together in a main class called the pizza test drive, which will create a pizza store and use it to create pizzas.

[Screenshot: the PizzaTestDrive main class]

 

From the blog CS@Worcester – thewisedevloper by thewisedeveloper and used with permission of the author. All other rights reserved by the author.

Read this and you’ll Know more about Design Patterns than you ever did!

Context: Engineers look for routine solutions before resorting to original problem solving. Design is probably the most challenging activity in the software development life cycle. There is no algorithm for deriving abstract solution models from requirements. The best the discipline of software engineering can offer are methods, techniques, heuristics, and design patterns.

Solution: A design pattern is a problem-solution pair. Design patterns are discovered rather than invented; they are paragons of good design. A design pattern, and more generally a design, is an abstraction of an implementation: what is being reused is the structure or organization of the code. The solution provided is the design and not the code. A design pattern is not a concrete design or implementation, such as an algorithm that can be used as-is, but rather a generic solution that must be adapted to the specific needs of the problem at hand. The purpose of the design process is to determine how the eventual code will be structured or organized into modules. The output of the design process is an abstract solution model, typically expressed with a symbolic modeling language such as UML.

Pros:

  • Studying design patterns helps to develop the intellectual concepts and principles needed to solve unique design problems from first principles. Design patterns define a shared vocabulary for discussing design and architecture. Catalogs of design patterns define a shared vocabulary at the right level of abstraction for efficient communication of design ideas.
  • Knowing popular design patterns makes it easier to learn class libraries that use design patterns. For example, the classes and interfaces that make up the Java IO package are confusing to many new Java programmers simply because they aren't familiar with the decorator design pattern.
  • Knowledge of design patterns simplifies software design by reducing the number of design problems that must be solved from first principles. Design problems that match documented design patterns have ready-made solutions. The remaining problems that don’t match documented design patterns must be solved from first principles.
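
The Java IO point above is worth seeing concretely: each stream or reader wraps another, adding behavior without changing the interface, which is the decorator pattern at work. A small sketch:

```java
import java.io.*;

public class DecoratorIODemo {

    // Reads the first line of some text by stacking decorators:
    // StringReader (the raw source) is wrapped by BufferedReader,
    // which adds buffering and the readLine() behavior.
    public static String firstLine(String text) {
        try (BufferedReader buffered = new BufferedReader(new StringReader(text))) {
            return buffered.readLine();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(firstLine("hello decorator\nsecond line")); // hello decorator
    }
}
```

Once you recognize the wrapping as decoration, combinations like new BufferedReader(new InputStreamReader(someInputStream)) stop looking arbitrary.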

Cons (not really! lol):

  • Applying design patterns is easier than solving design problems from first principles but their application still requires thoughtful decision making. You must be familiar enough with existing patterns to recognize the problem is one for which a design pattern exists.

Here are the important things to remember:

  • First, design (and more generally engineering) is about balancing conflicting forces or constraints.
  • Second, design patterns provide general solutions at a medium level of abstraction. They don’t give exact answers, and at the same time, they are more concrete and practical than abstract principles or strategies.
  • Finally, patterns aren’t dogma.

Here’s where Kids get Confused:

Intent Matters!!!!!

Design patterns are NOT distinguished by their static structure alone.

Can you tell me which represents state pattern and which represents strategy pattern?

[Two structurally identical UML class diagrams]

It is of course impossible. What makes a design pattern unique is its intent. The intent of a pattern is the problem solved, or the reason for using it. The intent of the State pattern is to allow an object to alter its behavior when its internal state changes. The intent of the Strategy pattern is to encapsulate different algorithms or behaviors and make them interchangeable from the client's perspective. The structure is the same for both solutions.
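
To see this concretely, here is a small Strategy example (the names are illustrative). A State implementation would have the same class structure; the difference is pure intent: in State, the context swaps its own behavior object when its internal state changes, while in Strategy the client chooses the behavior:

```java
// Strategy: interchangeable behaviors behind a common interface.
// (In the State pattern, this same interface would be called State.)
interface SortStrategy {
    int[] apply(int[] data);
}

class AscendingSort implements SortStrategy {
    public int[] apply(int[] data) {
        int[] copy = data.clone();
        java.util.Arrays.sort(copy);
        return copy;
    }
}

class DescendingSort implements SortStrategy {
    public int[] apply(int[] data) {
        int[] copy = new AscendingSort().apply(data);
        // Reverse the ascending result in place.
        for (int i = 0; i < copy.length / 2; i++) {
            int tmp = copy[i];
            copy[i] = copy[copy.length - 1 - i];
            copy[copy.length - 1 - i] = tmp;
        }
        return copy;
    }
}

// The context holds a reference to the interface, not a concrete class.
class Sorter {
    private SortStrategy strategy;
    Sorter(SortStrategy strategy) { this.strategy = strategy; }

    // In Strategy, the client calls this; in State, the object itself would.
    void setStrategy(SortStrategy strategy) { this.strategy = strategy; }

    int[] sort(int[] data) { return strategy.apply(data); }
}
```

The class diagram (context, interface, concrete implementations) is identical for both patterns; only the reason the behavior object gets swapped distinguishes them.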

Works Cited:

Burris, Eddie. “Introduction to Design Patterns.” Programming in the Large with Design Patterns. Leawood, Kan: Pretty Print, 2012. 11-33. Print.

From the blog CS@Worcester – thewisedevloper by thewisedeveloper and used with permission of the author. All other rights reserved by the author.

Google Testing Blog: Hackable Projects – Pillar 1: Code Health

Hackable Projects – Pillar 1: Code Health

In this blog post, author Patrik Höglund talks about how, over the years, software development can become stressful as you deal with and fix constant issues. The way he suggests resolving this is by making the software more "hackable"; not in the sense of making the software more vulnerable to attacks, but making it easier to modify from a developer's standpoint. Höglund goes on to say that a hackable project is one that includes easy development, fast builds, good and fast tests, clean code, easy running and debugging, and "one-click" rollbacks. He then describes the three main pillars of hackability: code health, debuggability, and infrastructure.

This post focuses solely on the first pillar of hackability: Code Health.

The first thing Höglund covers is the tests you should use. He says that "unit and small integration tests are probably the best things you can do for hackability," rather than relying on end-to-end testing. The other thing about testing is that if you have poorly tested legacy code, the best thing to do is refactor it and add tests along the way. Even though this can become time consuming, it's worth it because it leads to a more hackable system in the long run.

The next thing that should be done is making sure that your code is readable and goes through code review. This means there should be a reviewer who looks over the code changes to make sure the new code is consistent with the rest of the code. The changes should also be small and coded cleanly, so as to make it easy if a rollback is necessary. Another thing that will help with hackability is making sure that all of your code is submitted in a consistent format.

To reduce risks even more, you should try to consistently have a single branch for your project. This not only decreases the possible risks, but also reduces the expense of having to run tests on multiple branches. This could backfire, though, if, as Höglund writes, "Team A depends on a library from Team B and gets broken by Team B a lot." In that case, Höglund suggests, Team B might have to stabilize their software for Team A to use this method and have hackable software.

The last things that Höglund focuses on for Code Health are making sure that your code has loose coupling and testability, and finding ways to aggressively reduce technical debt.

From the blog CS WSU – Techni-Cat by clamberthutchinson and used with permission of the author. All other rights reserved by the author.

What Makes a Skilled Developer: A Sound Knowledge in Design Patterns

Nowadays top tech companies are willing to pay huge amounts to recruit capable architects, developers, and designers who can deliver software that is elegant. Elegant software is software that is easily testable, easily extensible, and free of defects. In today's day and age, good software design is critical to the long-term sustainment of the software in the market.

Thus, a software developer must be skilled in these areas to be able to develop elegant software. And the surest way to acquire these skills is by experience, a lot of it; around 10-15 years of working as a designer or architect would be ideal. But nowadays businesses can't afford to wait that long to get a developer of that caliber. They want schools to produce these engineers right off their 'assembly line.' As such, an alternative route is needed. Another way to become a skilled developer is to study design patterns. Design patterns are knowledge in the form of a template: a general, repeatable solution to a commonly occurring problem. But it is important to note that a design pattern is not a finished product that can be directly translated into code.

But design patterns have their fair share of criticism as well. Very often, applying design patterns carelessly leads to inefficient solutions or duplication of code. Thus, care should be taken in this regard.

Works Cited

https://sourcemaking.com/design_patterns

Burris, Eddie. Programming in the Large with Design Patterns.

From the blog CS@Worcester – thewisedevloper by thewisedeveloper and used with permission of the author. All other rights reserved by the author.