Monthly Archives: November 2017

11/27/2017 — blog post 10 CS443

https://www.ibm.com/developerworks/rational/library/11-proven-practices-for-peer-review/
The last two blog posts on code review discussed suggestions for improvement, its overall advantages, and a general overview of what exactly it is. This week’s blog post on code review will focus on the statistics. In general, the post lists 11 proven practices, based on experiments and surveys of experienced software developers, that have helped team members improve their code review abilities. Why is this important? Suggestions from experienced developers are based on experience, but this post crunches the numbers from case studies to demonstrate their effectiveness and to support the 11 proven practices with statistics. This is why I chose this article this week: to emphasize the importance of code reviews.
In my opinion, this article is one of the best I have seen so far at providing suggestions for code reviews and backing them up with numbers. As introduced, the suggestions in this post were compiled from code review studies and collections of lessons learned from over 6,000 programmers at over 100 companies. The main issue with code review, as the article suggests, is efficiency: reviews are often too long to be practical. The conclusions in the article were drawn from 2,500 code reviews, 50 programmers, and 3.2 million lines of code at Cisco Systems. The study tracked production teams with members in Bangalore, Budapest, and San Jose for 10 months.
The first suggestion is to review fewer than 200-400 lines of code at a time. The Cisco code review study revealed that the optimal range for finding defects is 200-400 lines of code at a time; beyond that, the ability to find defects decreases. The statistic is that if 10 defects existed, the team would likely find 7 to 9 of them. What is interesting is the graph shown in figure 1, which plots defect density against the number of lines of code under review. According to the graph, as the number of lines of code under review increases beyond 200, the defect density drops off considerably. So, the optimal effectiveness is at 200-400 lines.
The second suggestion is to aim for an inspection rate of fewer than 300-500 LOC per hour. The inspection rate measures how fast the team is able to review code. According to figure 2, the effectiveness of the inspection falls off once the rate climbs above about 500 LOC per hour. Finally, the last interesting point is to never review code for more than 90 minutes. The article suggests that developers take enough time for a proper, slow review, but no more than 90 minutes. It is generally known from many case studies that after 60 minutes of code review, productivity, effort, and concentration diminish. Most reviewers simply get tired and stop finding additional defects, and many will not be able to review more than 300-600 lines of code before their performance drops. So, the rate of finding bugs deteriorates after 60 minutes, and overall, code reviews should not last more than 90 minutes. The three guidelines fit together: reviewing 400 lines at 300-500 LOC per hour works out to roughly 50 to 80 minutes, which stays under the 90-minute limit.
I chose this article because the suggestions were too good to pass up for a blog post. I find the first three suggestions helpful to keep in mind when reviewing code, as effort always diminishes with time.

From the blog CS@Worcester – Site Title by myxuanonline and used with permission of the author. All other rights reserved by the author.

Approaching Complex Code

So for this week, I have decided to read “Code Smells: Too Many Problems” from the JetBrains IntelliJ IDEA blog. The reason I have chosen this blog is that, while I have some experience parsing code, I need another way to approach complexity. It will help me figure out the simplest step to take first when there is complexity in the code, instead of being overwhelmed by it.

This blog post goes over three approaches for figuring out how to tackle complex code. Each of the approaches has its pros and cons. The first approach is simple but effective: break the method down into smaller methods. While this helps with methods that have multiple responsibilities, it does not make some other smells more approachable to work with. However, it also helps in determining when the right time to refactor is. The second approach is to work on one smell at a time, following the series of steps the author has shown throughout the code smells series. This is the less risky approach, it is easier to reason about, and it gives simpler code. The third approach is to step back and try to model the problem. Instead of looking at individual smells or individual lines of code, this approach introduces new domain objects to model what is happening in the code. It encourages a bit of redesign and may lead to new classes that can help with other areas of the code, but be aware that it will have the same problems as the other two approaches if not used carefully.
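To make the first approach concrete, here is a minimal sketch of my own in TypeScript (not code from the JetBrains post) showing one method with two responsibilities being broken into smaller, single-purpose functions:

```typescript
// Before: one function with multiple responsibilities.
function printInvoice(items: { name: string; price: number }[]): void {
  let total = 0;
  for (const item of items) {
    total += item.price;                        // responsibility 1: compute the total
  }
  console.log(`Total: $${total.toFixed(2)}`);   // responsibility 2: format and print
}

// After: each responsibility extracted into its own small, testable function.
function computeTotal(items: { name: string; price: number }[]): number {
  return items.reduce((sum, item) => sum + item.price, 0);
}

function formatTotal(total: number): string {
  return `Total: $${total.toFixed(2)}`;
}

function printInvoiceRefactored(items: { name: string; price: number }[]): void {
  console.log(formatTotal(computeTotal(items)));
}
```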

Based on the contents of this blog, I would say this was a pleasant read on getting started with tackling complex code. The author was able to show how the approaches work with a sample of code and addressed the drawbacks of each. It was easy to understand. What I did to get started was to take one method at a time and try to refactor it based on the design it was being used in. Sometimes I would also write down comments if I needed a clear reminder of what I was trying to do while refactoring.

What I learned from this blog is that if breaking down the methods and refactoring happens to lead to a dead end, then undoing a few steps is not a bad idea. The idea I have gotten from this blog is that all the tests should pass, and commits should be small but very frequent. This makes it easy to step back without ending up in a place that leads to a complete halt. For future practice, I shall try to use the rule of three to help determine whether a method needs to be broken down into smaller pieces.

Link to the blog: https://blog.jetbrains.com/idea/2017/09/code-smells-too-many-problems/

From the blog CS@Worcester – Onwards to becoming an expert developer by dtran365 and used with permission of the author. All other rights reserved by the author.

Video Game Career Tips

I love video games, and the prospect of making a career out of them has always been a dream that I believed was too good to be true. However, the game industry has been on the rise for decades and increasingly proves to be a real career choice, with the added benefit that even a small team can produce something that stands out. With this dream in mind, I’ve read the following article on how to get started in the game industry, with advice and interviews from current game developers.

https://www.theguardian.com/technology/2014/mar/20/how-to-get-into-the-games-industry-an-insiders-guide

The first question asked of the panel is “What is the best way to start making games?”, to which many emphasized the importance of learning the basics of coding. While some game development tools allow you to get started with little coding knowledge, it’s unavoidable that you eventually learn a programming language or two; C++ is recommended specifically. Game development tools such as Unity, RPG Maker, and GameMaker Studio help significantly by teaching transferable concepts of what goes into making a game.

Another question was “If someone is looking to set up a small studio themselves – what advice would you give them?” A few panelists strongly advised against starting a studio early on, saying a better choice would be to get experience in an already established studio before gaining the confidence to branch off on your own. However, if you were to start a studio, you would absolutely need a great programmer as well as an artist. It is also very important to have a team member who knows the business side of the industry. Byron Atkinson-Jones shares that “the making of the game, that’s actually the easiest part. Managing things like business finances, making sure you can all eat regularly, marketing, PR, legal stuff, QA and selling the game once it’s done are the hardest.”

The next question asked was “Are there any key skills that people should have or things they should know that aren’t obvious or aren’t taught on design/coding courses?” Many of the panelists stressed the importance of communication within a team. Being a nice person while staying open to criticism for the sake of the project is an invaluable trait that cannot be taught in school. You could be very skilled and experienced, but if you don’t get along with group members and refuse to communicate effectively, your project will suffer greatly.

The last question I’ll go over in this post is “Is a degree in computer games programming or design a necessity?” The short answer from most of the panelists is no; you can go a long way with passion and devotion to video games as long as you have the portfolio to back it up. Aj Grand-Scrutton expresses that “a degree is effectively gravy compared to an actual portfolio,” emphasizing that real experience dominates over just having a degree.

From the blog CS@Worcester – CS Mikes Way by CSmikesway and used with permission of the author. All other rights reserved by the author.

Re: Angular, TypeScript and Final Project

My teammate and I are currently working on a Blackjack card game, which we will present to our class during finals week. I’ve spent the last few weeks trying to become more familiar with Angular and TypeScript for this project, and I believe I am starting to make some good progress. For instance, I have figured out how to build a card deck, shuffle it, and display images of these cards to the user.
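Roughly, the deck-building and shuffling part looks something like the sketch below (a simplified version, not our exact project code; the image paths are placeholders for wherever the card assets live):

```typescript
// Build a 52-card deck and shuffle it with a Fisher-Yates shuffle.
interface Card {
  suit: string;
  rank: string;
  image: string; // path to the card image shown to the user (hypothetical asset layout)
}

const suits = ['hearts', 'diamonds', 'clubs', 'spades'];
const ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K'];

function buildDeck(): Card[] {
  const deck: Card[] = [];
  for (const suit of suits) {
    for (const rank of ranks) {
      deck.push({ suit, rank, image: `assets/cards/${rank}_of_${suit}.png` });
    }
  }
  return deck;
}

function shuffle(deck: Card[]): Card[] {
  // Fisher-Yates: swap each card with a randomly chosen card at or before its position.
  for (let i = deck.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [deck[i], deck[j]] = [deck[j], deck[i]];
  }
  return deck;
}

const shuffledDeck = shuffle(buildDeck());
```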

There is a great online instructor named Mosh Hamedani with a series of tutorial videos for both Angular and TypeScript. I’ve already watched several of his videos and found them extremely useful and informative. The instructor has a personal blog as well. I would like to discuss one of his blog entries in particular, entitled Angular 4 in 20 minutes.

First, a disclaimer. I certainly did not learn Angular in 20 minutes, but Mosh’s insight and clear explanations are helping me understand Angular and TypeScript concepts more than any other tutorial I’ve tried so far. 

Mosh’s blog entry is a synopsis of one of his free tutorial videos, which is approximately two hours long. I’ve watched the entire video twice already and I believe it is definitely worthwhile to anyone who is trying to learn Angular and TypeScript. Here are a few important points that Mosh brings up in his video synopsis:

He explains that Angular is a framework for building applications in HTML, CSS, and TypeScript/JavaScript. He also answers the question of why a developer would want to use Angular rather than alternative methods. I have to agree with Mosh here that learning TypeScript with the Angular framework seems a whole lot easier than, as he puts it, “vanilla JavaScript.” I believe this is due to the “IntelliSense” offered with Angular and TypeScript, which is currently unavailable in plain JavaScript. When comparing TypeScript code side by side with JavaScript, in my opinion, the former is much easier to understand than the latter.
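As a tiny illustration of my own (not from Mosh’s video) of why that editor support helps, type annotations let the editor offer completions and catch typos before the code ever runs:

```typescript
// With an explicit type, the editor can autocomplete "player." and
// flag a misspelled property at compile time instead of at runtime.
interface Player {
  name: string;
  score: number;
}

function announceWinner(player: Player): string {
  return `${player.name} wins with ${player.score} points!`;
}

// announceWinner({ name: 'Ada', socre: 21 }); // compile-time error: 'socre' does not exist in type 'Player'
```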

Mosh also goes through a step-by-step process on how to install everything needed to run the Angular framework, including how to create a new application project from the terminal. Next, he uses MS Visual Studio to go through the project layout and explains every single file that was created, including their functions and purposes. He then demonstrates how to generate new components within the project and how to connect them with the main application module.
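To give a rough idea of what that looks like (my own condensed sketch, not code from the video; the component name, selector, and image path are made up), a generated component is a decorated TypeScript class that gets registered in the application’s main module:

```typescript
// card.component.ts -- roughly what "ng generate component card" produces,
// filled in by hand afterwards.
import { Component } from '@angular/core';

@Component({
  selector: 'app-card',
  template: '<img [src]="imageUrl" alt="playing card">'
})
export class CardComponent {
  imageUrl = 'assets/cards/back.png'; // hypothetical image path
}

// To connect it to the main application module, the CLI adds CardComponent to
// the declarations array in app.module.ts, roughly:
//
//   @NgModule({
//     declarations: [AppComponent, CardComponent],
//     imports: [BrowserModule],
//     bootstrap: [AppComponent]
//   })
//   export class AppModule {}
```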

The free video posted in Mosh’s synopsis blog entry is part of a 30+ hour Udemy course video compilation that typically costs at least $200. Fortunately, there was a great “Black Friday” sale going on, and I was able to purchase the entire course for just ten dollars. I feel I am making great progress in learning Angular and TypeScript; I honestly believe it would not have been possible without his videos and blogs. I am certain I will continue to reference Mosh’s insightful blog entries and tutorial videos, and apply what I’ve learned from him during my professional career.

 

From the blog CS@Worcester – Jason Knowles by Jason Knowles and used with permission of the author. All other rights reserved by the author.

#7_343

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

#7_443

From the blog CS@Worcester – Not just another CS blog by osworup007 and used with permission of the author. All other rights reserved by the author.

Continuous Development

Continuing on with the TestOps posts from the, well, awesome Awesome Testing blog is Continuous Development. This is actually very interesting, as it was a large part of what was taught in my Software Process Management course last year, so it was an enjoyable surprise to see it as the next covered topic.

Generally speaking, Continuous Development is, according to Wikipedia, “the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with software development.”

The first step is Continuous Integration and unit tests. After every single commit by a developer, the main branch of the app should be compiled and built, and then unit tests should be executed to give the quickest feedback possible. The post suggests using mutation testing, which introduces small, random faults into your code to see whether the tests catch them, as a way to test the unit tests themselves and see how good they are. After that, the developer should be made aware of how their commit changed the overall code coverage statistics.
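As a rough illustration of the idea behind mutation testing, here is a small sketch of my own (not from the post), assuming a Jest-style test runner:

```typescript
import { test, expect } from '@jest/globals';

// Original function under test.
export function isEligible(age: number): boolean {
  return age >= 18;
}

// A mutation-testing tool might generate a "mutant" by swapping ">=" for ">".
// If the whole test suite still passes against the mutant, the suite is too weak.
export function isEligibleMutant(age: number): boolean {
  return age > 18;
}

// This boundary-value test "kills" the mutant: isEligible(18) is true,
// but the mutant would return false, so the failure exposes the weak spot.
test('18-year-olds are eligible', () => {
  expect(isEligible(18)).toBe(true);
});
```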

The next step is Continuous Delivery, or Automated Deployment. One should do numerous test-environment deployments to exercise the deployment process of the application as well. After this comes testing higher-level things, such as functionality at the integration or API level. End-to-end testing is very expensive, resource-wise, and should be done sparingly.
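A minimal sketch of what an API-level check against a test-environment deployment could look like (my own example; the URL and endpoint are hypothetical, and it assumes Node 18+ for the built-in fetch):

```typescript
import { test, expect } from '@jest/globals';

// Hypothetical base URL of a test-environment deployment.
const BASE_URL = 'https://myapp-test.example.com';

test('deployed service answers on its health endpoint', async () => {
  const response = await fetch(`${BASE_URL}/health`);
  expect(response.status).toBe(200);
});
```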

After that is performance testing, using a testing environment as close to the production environment as possible; you want to see how the application handles heavy loads. Then comes security testing, to make sure the application is as safe from being hacked as you can manage, and finally the hardest step, exploratory testing. This is a manual exploration of the application that takes a lot of time and resources, so it should be done sparingly as well.

Overall, this was another nice intersection between software development and testing. It was also a good reminder of concepts I learned in the very recent past and found very interesting at the time. The ability to streamline the process for a developer and give them feedback as quickly as possible is incredibly important, and its power to foster greater productivity is readily apparent. To that end, there are many useful tools out there for testers and developers alike. It’s a very straightforward example of testing directly helping developers, which is nice to see.

Original Post: http://www.awesome-testing.com/2016/10/testops-3-continuous-testing.html

From the blog CS@Worcester – Fu's Faulty Functions by fymeri and used with permission of the author. All other rights reserved by the author.

AB Testing – Episode 5 by Brent Jenson and Allen Page.

In this week’s testing episode, Brent and Allen begin by addressing end-to-end automation testing. It seemed that the original purpose of automation testing was being bypassed: automation testing is best suited for short, quick tests and regression checks. But by bringing development architecture into testing, we are able to create a more organized and more structured development environment.

Brent continues by addressing an issue that happened at Amazon while he was there. They didn’t seem to have enough testers, because whenever an update was made, it was reverted due to bugs and collisions with other programs that were found later. The reversion process led developers to place signals and interrupts in the programs that would be triggered when parts of the apps or project were breaking. This ended up educating the team about the need for and importance of more testers to find bugs and faults in the programs and updates. Project rollouts and changes often have a drastic effect on overall product quality in the eyes of the users. It is often overlooked that creating proper checkpoints in a program builds a strong barrier against loss of service, since a checkpoint would be triggered by an update that affects the performance of the program. Teaching programmers these testing techniques forces them to refactor their code and build it to withstand updates that could break it. They also tend to write code that can be easily tested for bugs and holes. This practice creates a unique cost optimization, though it can also produce very complex code that is not easily tested using automation, since outputs cannot be predicted.

Another tool introduced in the podcast was automated GUI testing. This is a testing approach often used by developers to build proper test cases and scenarios. Automated GUI testing scales up testing efforts, speeds up delivery time, and improves test coverage. This is the main reason why teams that adopt agile testing methodologies and continuous integration practices continue to invest in automated testing tools that can be used to perform front-end testing. Implementing GUI testing becomes more complex as time progresses and is almost never a linear process; it is a demanding part of the development lifecycle that QA teams must dedicate a large amount of time to. To sum things up, the best automated testing tools will not only have strong record-and-replay capabilities and flexible testing frameworks, but will also help you cut down on testing times and increase the speed of delivery.
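For a sense of what an automated GUI check looks like in practice, here is a small sketch of my own (not from the podcast) using the selenium-webdriver package; the page URL, field names, and expected title are all hypothetical:

```typescript
import { Builder, By, until } from 'selenium-webdriver';

async function checkLogin(): Promise<void> {
  // Launch a browser session driven by Selenium.
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://myapp-test.example.com/login');
    await driver.findElement(By.name('username')).sendKeys('demo');
    await driver.findElement(By.name('password')).sendKeys('demo-password');
    await driver.findElement(By.css('button[type="submit"]')).click();
    // Fail if the dashboard does not appear within five seconds.
    await driver.wait(until.titleContains('Dashboard'), 5000);
    console.log('GUI login check passed');
  } finally {
    await driver.quit();
  }
}

checkLogin().catch((err) => {
  console.error('GUI login check failed:', err);
  process.exit(1);
});
```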


LINK

https://testingpodcast.com/?powerpress_pinw=4538-podcast

https://smartbear.com/learn/automated-testing/manual-vs-automated-gui-testing/

From the blog CS@Worcester – Le Blog Spot by Abranti3 Dada Kay and used with permission of the author. All other rights reserved by the author.

What Is Project Valhalla?

In this blog post, Justin Albano explains what Project Valhalla is.

“Project Valhalla is an OpenJDK project started in 2014 and headed by Brian Goetz with the purpose of introducing value-based optimizations to Java Development Kit (JDK) 10 or a future Java release. The project is primarily focused on allowing developers to create and utilize value types, or non-reference values that act as though they are primitives. In the words of Goetz: Codes like a class, works like an int.”

“Project Valhalla has a very specific purpose: To cease the requirement that Java developers choose between performance and abstraction.”

What Are Value Types?

“Value types are groups of data whose immediate value is stored in memory, rather than a reference (or pointer) to the data.” Doing this means saving memory otherwise taken up by overhead data. “Taking data and directly placing its value into memory (rather than a reference) is called flattening and its benefits are more acutely demonstrated with arrays.” “In an array of value types, the values are directly placed into the array and are guaranteed to be in contiguous memory (which increases locality and, consequently, the chance of cache hits).” This idea is illustrated in a figure in the original article.

The benefits of using Value Types, as listed by the author, are:

  • Reduced memory usage: There is no need for additional memory to store object metadata.
  • Reduced indirection: Because objects are stored as reference types in Java, each time one is accessed, it must first be dereferenced, causing additional instructions to be executed.
  • Increased locality: Using flattened value objects removes indirection, increasing the likelihood that values are stored adjacently in memory.

One of the major differences between reference types and value types:

“The identity of a reference type is intrinsically bound to the object while the identity of a value type is bound to its current state”

The reason I picked this resource is that I did not know about Project Valhalla, and it seemed like an interesting article to learn from. It’s not quite ready to be released in the JDK, but it is a useful addition to Java that increases performance and saves memory. I feel the content of the post was interesting and informative. I learned the benefits of using Value Types versus using pointers and the improvements being made to Java. Value Types may soon be released in an upcoming JDK, and I would like to know how to utilize them when saving memory is crucial.

Additional resource: Minimal Value Types article


From the blog CS@Worcester – code friendly by erik and used with permission of the author. All other rights reserved by the author.

Blog #6 – AngularJS for Absolute Beginners

https://medialoot.com/blog/angularjs-for-absolute-beginners/

For this week’s blog, I chose a tutorial titled “AngularJS for Absolute Beginners” by Jenn Coyle, because our final project is going to be coded in Angular, so I think that finding different tutorial blogs for Angular is a good way to help familiarize myself with the framework. I like this blog by Jenn Coyle because it takes a straightforward approach to Angular and does a great job of making the tutorial easy to follow, so that it doesn’t overwhelm the person reading it.

 

Coyle’s blog starts off with an important sentence: “Let’s face it, writing web applications is hard.” I like that Coyle started her blog with that sentence because she assures the reader (who is probably just starting to learn Angular) that if they think writing web applications is hard, that’s perfectly fine, because it is supposed to be. I also think this adds a sense of comfort for the reader.

Coyle goes on to say that AngularJS eases the pain of battling to make a functional front-end. The blog makes clear the prerequisites you’ll need to learn Angular, which are just a basic understanding of HTML, CSS, and JavaScript.

Coyle’s next point is why Angular is helpful. Anyone who has written code for a front-end web app has probably written code that’s soaked with bad practices. She defines imperative programming, which changes the state of the web application by creating a flow that appends new elements to a list; this can be seen as negative because it can hide important functionality. Other problems that Angular addresses are direct DOM manipulation, global scope, and lack of organization.

Because of these problems, a meaningful and reusable structure for applications was created: AngularJS. AngularJS is a front-end JavaScript framework that provides us with an opinionated way to build a powerful front-end for applications. It is opinionated because it forces the user to follow a specific pattern.

Coyle explains the structure of Angular well: it is built to isolate specific responsibilities of an application from each other, and it is made up of different components. Coyle also goes over the other basic structures of Angular, like directives, controllers, and views.

Coyle then goes on to give step-by-step instructions on how to create an AngularJS to-do program; a minimal sketch of the first few steps appears after the list below.

  1. Create the module
  2. Initialize the View
  3. Create the controller
  4. Set up $scope
  5. Tell the view about the controller
  6. Bind data to the page
  7. Create view layout
  8. Set up data on the controller
  9. Bind the todos to the view
  10. Set up the template
  11. Finishing a todo
  12. Removing the todo in the controller
  13. Add a new todo
  14. Adding a todo to the controller

All of these steps come with code and an explanation, which helps make everything clear. This, along with the thorough explanations Coyle gives for everything in AngularJS, is why I chose this blog as a good tutorial for beginners. It breaks everything up into an approachable process for anyone who wants to learn AngularJS.

From the blog CS@Worcester – Decode My Life by decodemylifeblog and used with permission of the author. All other rights reserved by the author.