When Object Oriented Programming (OOP) becomes POO

Something that has certainly been ingrained in my programming brain is that code should be as easy to reuse as possible, and that this is accomplished through the use of objects in what is known as object-oriented programming. My very first programming class used Java, and most of the academic programming I have done since then has also been in Java. Java is a self-described “general-purpose, concurrent, strongly typed, class-based object-oriented language.” Because of its object-oriented nature, one cannot learn to program effectively in Java without learning to program in an object-oriented manner. While object-oriented programming often allows for the efficient reuse and maintenance of code, it can also overcomplicate things in certain instances. Knowing when and where to step away from an object-oriented approach is important to creating something that is easy for others to understand and build on.

In a Coding Horror post titled “Your Code: OOP or POO?” from March 2007, Jeff Atwood explains why programming in a way that considers the fellow programmers who may work with your code after you is more important than mindlessly creating objects for the sake of creating them. Atwood goes on to explain that it is the principles behind object-oriented design that truly matter: encapsulation, simplicity, and the reusability of your code. He stresses that attempting to “object-ify” every concept in your code often introduces unnecessary complexity, and he uses a memorable metaphor that compares adding objects to adding salt to a dish: “a little goes a long way.”
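To make that point concrete, here is a small hypothetical sketch of my own (not from Atwood’s post) in Java, showing a trivial calculation first buried under needless object ceremony and then written plainly:

```java
// A minimal sketch (my own, not from Atwood's post) of "object-ifying" a
// trivial concept. The hypothetical Operation hierarchy adds three types
// of ceremony to something a one-line method handles just as well.
interface Operation {
    int apply(int a, int b);
}

class AdditionOperation implements Operation {
    @Override
    public int apply(int a, int b) {
        return a + b;
    }
}

class OperationFactory {
    static Operation createAddition() {
        return new AdditionOperation();
    }
}

public class Calculator {
    // The simpler alternative: a plain static method.
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // Three classes of ceremony...
        System.out.println(OperationFactory.createAddition().apply(2, 3)); // 5
        // ...versus one line that reads exactly like what it does.
        System.out.println(add(2, 3)); // 5
    }
}
```

Neither version is wrong in isolation; the salt metaphor is about recognizing that the first version buys nothing here.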

It must be made clear that Jeff Atwood and the other programmers he mentions in his post are not against OOP. Rather, they are against its abuse and misuse by those who do not understand where creating objects is beneficial and where it is simply cumbersome or clumsy. Object-oriented programming is an extremely powerful tool for creating projects that are reusable and easily maintained or changed. The important takeaway from Atwood’s post is that the real problem is the way new programmers are conditioned to think that every piece of code they write must somehow become an object, lest it be poor programming. Although he never states it directly, I took Atwood’s post as a call to educate new programmers about the pitfalls of writing overly complex object-oriented code where a simpler alternative without objects would do.

From the blog CS@Worcester – ./George Matthew by gmatthew and used with permission of the author. All other rights reserved by the author.

Importance of Security

Source: https://blog.codinghorror.com/hacker-hack-thyself/

Recently, I read the post “Hacker, Hack Thyself” by Jeff Atwood on Coding Horror. The post emphasizes database security and the effort it takes to maintain a platform that can withstand a serious threat such as a database breach. The point being made is that as time passes, older security measures must be improved to keep up with the newer technology available to both defenders and attackers. Atwood works through an example of how long it takes to crack passwords under the assumption that hackers will somehow obtain a copy of the database. With the database in hand, attackers can use various methods to crack the password hashes that are considered a secure way of storing passwords. One of these methods is brute force, which is gated by time, which in turn is gated by hardware. Hashing algorithms designed for yesterday’s hardware must be strengthened to meet newer security standards: pit the newest generation of GPUs, such as the GTX 10 series (specifically the GTX 1080 Ti), against older algorithms and you can see how quickly those algorithms are blown out of the water. Brute force is also not the only option, as there are other ways to crack the hashes. The driving message behind the post and its title is that to secure your platform, you must first try to break into your own safeguards, patching the holes in the system you created before it is too late.
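Atwood’s post benchmarks real hashing schemes; as a rough illustration of the underlying idea, here is a minimal Java sketch of my own (not from the post) using the standard library’s PBKDF2 support, where the iteration count is the work factor you raise as attacker hardware improves:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;
import java.util.Base64;

public class PasswordHashDemo {
    public static void main(String[] args) throws Exception {
        char[] password = "correct horse battery staple".toCharArray();

        // A unique random salt per user defeats precomputed (rainbow) tables.
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // The iteration count is the tunable work factor: each guess an
        // attacker makes costs this many rounds, so raise it over time
        // as GPUs get faster. 100,000 here is an illustrative value.
        int iterations = 100_000;

        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, 256);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                      .generateSecret(spec)
                                      .getEncoded();

        // Store the salt, iteration count, and hash; never the password.
        System.out.println(Base64.getEncoder().encodeToString(hash));
    }
}
```

The same principle is behind bcrypt and other deliberately slow hashes: the defender chooses how expensive each brute-force guess is.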

The reason I decided to read this post on security is the recent breach at Equifax, which affected hundreds of millions of people who used the company’s services. The subject is not entirely new to me because of a previous CS course that dealt with computer networking, security, and databases. The post reinforces the idea that no security system is ever entirely free of holes, and that defenses must keep improving to counteract newer technology that could pose a threat.

One thing that stood out to me from the post is the statement that many users do not treat their email address as an important asset. I wholeheartedly agree that many users do not fully understand the risk of an unknown individual gaining access to their email account. It is fair to say that many users rely on a single email address, which means every account on every website connected to that address is susceptible to being compromised. This is a serious problem that could be prevented with a stronger password and newer security measures such as two-factor authentication. For myself, I use all the security measures available in each service, as well as my own: every account gets a different randomly generated password, as long and as random as the service allows, while respecting whatever character restrictions apply. Which raises the question: why do some services restrict certain characters from being used in a password at all?
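As an aside, a generator of the kind I describe is simple to sketch. This is my own minimal Java illustration, not any particular service’s or password manager’s implementation:

```java
import java.security.SecureRandom;

public class PasswordGenerator {
    // Character pool; trim this when a service restricts certain symbols.
    private static final String POOL =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
          + "0123456789!@#$%^&*()-_=+";

    // SecureRandom (not java.util.Random) so output is not predictable.
    private static final SecureRandom RANDOM = new SecureRandom();

    static String generate(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(POOL.charAt(RANDOM.nextInt(POOL.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Use the maximum length the service allows.
        System.out.println(generate(32));
    }
}
```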

From the blog CS@Worcester – Progression through Computer Science and Beyond… by Johnny To and used with permission of the author. All other rights reserved by the author.

On Exploratory Testing and ‘Playing About’

A thought occurred to me recently while writing tests with the boundary value testing method: “What would happen if I tried inputs of an incorrect type rather than the values the testing method called for?” Simple procedures such as normal boundary value testing clearly leave much to be desired, but more comprehensive methods such as robust equivalence class testing seem to cover most potential sources of input error, right? I think not. It was often difficult to fight my intuition and my desire to choose test values that did not line up with the boundary value or equivalence class procedures. This desire to choose what to test, and how to test it, based on experience and intuition is what is known in the software quality assurance field as exploratory testing, and it is an extremely powerful method when used properly and appropriately.
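For instance, a classic boundary value suite, shown here as a minimal sketch of my own (assuming JUnit 4 on the classpath and a hypothetical clampScore function), exercises values at and around the edges of the valid range, and notably says nothing about inputs of the wrong type:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BoundaryValueTest {
    // Hypothetical unit under test: clamps a score to the range [0, 100].
    static int clampScore(int score) {
        return Math.max(0, Math.min(100, score));
    }

    @Test
    public void boundaryValues() {
        // Classic boundary cases: min-1, min, min+1, nominal, max-1, max, max+1.
        assertEquals(0,   clampScore(-1));
        assertEquals(0,   clampScore(0));
        assertEquals(1,   clampScore(1));
        assertEquals(50,  clampScore(50));
        assertEquals(99,  clampScore(99));
        assertEquals(100, clampScore(100));
        assertEquals(100, clampScore(101));
    }
}
```

In a statically typed language like Java a wrong-type input will not even compile against this signature, so “what if the input isn’t a number at all?” has to be asked of the program’s real input layer instead, which is exactly the kind of blind spot that invites exploratory poking.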

In a Let’s Talk About Tests podcast titled “Players gonna play play play play play,” from September 21, 2017, Gem Hills talks about the benefits of getting familiar with a program and gaining a better understanding of it simply by “playing about” with the program itself. Hills argues that simply using the program, and applying knowledge from previous experience with similar programs or with the development team itself, can provide a software tester with revealing test cases that would be missed by relying solely on a testing framework. In her post, Hills references Richard Bradshaw, perhaps better known as the “Friendly Tester.” Specifically, Hills recalls attending one of Bradshaw’s talks at the BBC in August, where he spoke about how “having a play” with a program becomes far more impactful when you simply record the tests and their results, effectively turning play into exploratory testing.

One of the reasons Hills cites for playing with a program with no real agenda is that “[she] is not confident that [she’s] tested everything but [she’s] not sure what [she’s] missing.” I found this point especially telling, because it is exactly how I felt while writing test cases for the boundary value testing assignment: sure that I was missing something, but unsure exactly what. It therefore makes sense that I should start playing around in an attempt to discover what I am missing. Perhaps I am simply lacking the appropriate testing method, and some exploratory testing will point me toward an effective one. It is also possible, however, that a program contains a bug that no number of formal testing procedures will find. In those cases, exploratory testing, or “playing about,” is the only effective testing tool.

From the blog CS@Worcester – ./George Matthew by gmatthew and used with permission of the author. All other rights reserved by the author.

WSU Blog #2 for CS-343

URL: http://blog.gatherspace.com/2011/08/10-tips-a-use-case-template/

This blog post provides ten tips for handling common tasks in UML-based design, framed around building a use case template.

From the blog Rick W Phillips - CS@Worcester by rickwphillips and used with permission of the author. All other rights reserved by the author.

Top 5 Common Myths in Software Testing

I read the post “Top 5 Common Myths in Software Testing.” It is a somewhat old post, but all of the myths mentioned in it are still going around to this day. The five myths the author debunks, in order, are:

  1. Software Testing is a Mundane No-Brainer Job
  2. A Tester Should Be Able To Test Everything
  3. A Tester’s Job is to Find Bugs
  4. Testers Add No Value to The Software
  5. Test Automation Will Eliminate Human Testers

For the first myth, the author argues that the job is only mundane and boring if you are doing it wrong: testing cannot be boring if you see the process as information gathering rather than just running routine tests looking for bugs. For the second, he argues correctly that testers are limited by a variety of factors, such as time or the inability to test certain cases because the infrastructure is lacking. For the third (which also leads into the fourth), he makes it clear that while finding bugs is one of a tester’s key roles, it is not the only one: testers also ensure that the product actually meets its requirements, that it is user friendly, and so on. Finally, he explains why he believes automation will never replace human testers, and that reason is emotion. Essentially, he believes that no matter how advanced automation becomes, it will never have the human perspective.

The reason I chose this post is that I had some preconceived notions about testing that were quickly proven wrong when I did an internship as a tester, and I wanted to see whether any of the things I believed before were among these common myths. After reading the article, I realized that before my internship I had believed all of them. The biggest one for me was “A Tester’s Job is to Find Bugs”; while that was true, we also had many more responsibilities. One thing that surprised me was the strict standard the testers held for documentation; they took it REALLY seriously! If an API was not extensively documented, that was considered a severe bug. That level of detail showed me that testing really isn’t just finding bugs; it is also making sure that the product simply works and, most importantly, is easy to use. This article really hit the point home for me.

It can be found at: http://www.softwaretestingtricks.com/2012/04/top-5-common-myths-in-software-testing.html

From the blog CS@Worcester – Site Title by lphilippeau and used with permission of the author. All other rights reserved by the author.

Writing “Clean” Code

https://www.thecodingdelight.com/write-clean-code/#ftoc-heading-2

As I learn more about computer science, I find that writing code for a programming project is very similar to writing an essay for an English class. The core structure of a coding language mimics the core structure of any other language: a run-on sentence is like poorly used indentation, and grouping your variables together at the top of the page is like a well-organized thesis statement. Just as we spend years in English courses perfecting our grammar and sentence structure, it takes a lot of practice and learning to perfect our syntax and our ability to write clean code.

Some benefits of writing clean code include:

  • Being able to easily follow your own code to track down errors
  • More efficient productivity
  • A more professional reputation
  • Your employer/teacher will resist the urge to strangle you

This article explains in detail why we should try to better our code and how to develop better habits in place of common mistakes. I, for one, typically write messy code just hoping it will work, and only later try to buff out bad variable names, overly complicated methods, and half-written comments. After reading this article, however, I’ve learned tips and tricks for writing better code and embracing the clean-code mindset. I can only imagine how many programming teachers’ frustrated evenings I’ve prevented by reading this article, so for your own sake, and for the sake of whoever has to look at your code, I recommend you read it as well.
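To illustrate the kind of cleanup the article encourages, here is a tiny before-and-after sketch of my own in Java (the example is hypothetical, not taken from the article):

```java
// Before: a cryptic name and magic numbers.
//   static double x(double a, int b) { return b > 5 ? a - a * 0.1 : a; }

public class PriceCalculator {
    private static final double BULK_DISCOUNT_RATE = 0.10;
    private static final int BULK_THRESHOLD = 5;

    // After: intent-revealing names make the business rule readable
    // without needing a comment to explain what the code does.
    static double applyBulkDiscount(double price, int quantity) {
        if (quantity > BULK_THRESHOLD) {
            return price * (1 - BULK_DISCOUNT_RATE);
        }
        return price;
    }
}
```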

From the blog CS@Worcester – CS Mikes Way by CSmikesway and used with permission of the author. All other rights reserved by the author.

CS 343 Blog – New Theory on Deep Learning – 9/25/17

https://www.quantamagazine.org/new-theory-cracks-open-the-black-box-of-deep-learning-20170921/

This article from Quanta Magazine details a new theory on why the artificial intelligence algorithms behind deep neural networks are so successful. The theory, presented by computer scientist and neuroscientist Naftali Tishby, argues that deep neural networks learn by a procedure called the information bottleneck: a network rids itself of extraneous details in the input data and retains only the details most relevant to general concepts. In experiments, Tishby tracked how much information each layer of a network retained about the input data and the output label. He found that, layer by layer, the networks converged to the information bottleneck theoretical bound, the theoretical limit representing the absolute best the system can do at extracting relevant information. Tishby’s discovery is an important step toward figuring out exactly how neural networks work, and it gives a better understanding of which kinds of problems can be solved by real and artificial neural networks.
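For readers who want the formal statement, the information bottleneck objective from the literature (the article describes it only in words) can be written as:

```latex
% Information bottleneck objective: choose a stochastic encoding p(t|x)
% of the input X into a representation T that is maximally compressed
% (small I(X;T)) while staying informative about the label Y (large I(T;Y)).
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```

Here X is the input, Y the label, T the compressed internal representation, I denotes mutual information, and the parameter β sets the trade-off between compressing X and preserving information about Y.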

I selected this article because artificial intelligence is an important and growing field in computer science, and it will have profound effects on society as systems like deep neural networks become more advanced. I think that as artificial intelligence continues to evolve, it will become more important for software developers to understand how A.I. systems work.

I thought this article was very well written and did a good job of making a very complex subject understandable. From it I learned the basics of how a deep neural network works, and I find it very interesting that the design is inspired by the architecture of the human brain. It is also striking that we are not even sure exactly how deep neural networks learn; the information bottleneck theory detailed in the article could end up being a very important advancement in this technology. Naftali Tishby’s quote in the article summarizes the basic idea: “The most important part of learning is actually forgetting.” Tishby believes this is a fundamental principle behind learning, whether you are an algorithm or a conscious being, a viewpoint on learning I had not encountered before reading this article. He argues that deep neural networks learn by receiving a large amount of input and then “forgetting” information that is not relevant, until there is an optimal balance of accuracy and compression in the final layer. Overall, I find this idea very intriguing and something I would like to learn more about.

From the blog CS@Worcester – Computer Science Blog by rydercsblog and used with permission of the author. All other rights reserved by the author.

B2: Automatic Code Transplants

Automatic Code Reuse

This week I found another MIT article that piqued my interest; it explains the modifications a system makes to “transplant” code from one program into another. The article explains that MIT’s Computer Science and Artificial Intelligence Laboratory developed a system, CodeCarbonCopy, that lets programmers copy code from one program and, using an insertion point, move it into another. The interesting part is that the system makes any modifications necessary to integrate the code into the new program, such as changing variable names. The researchers tested CodeCarbonCopy on several open-source image-processing programs, with seven of eight transplants working successfully. For the system to work, both programs are given the same input file, and the system compares how each program processes it. Once it finds correspondences between variables, it first displays those similarities to the user and then displays the variables that differ. The user can then flag variables that are not needed for the transfer, and the system will excise any operations that use them. The system also looks at the precise values both programs store in memory and uses those values to generate a set of operations that translate one representation into the other. The article notes that the system is still basic in usability, but it works well with file formats that follow a rigid, easy-to-read organization. The researchers are now trying to generalize the approach so that code can be transplanted between programs that use different file formats.
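The article does not show CodeCarbonCopy’s internals, but as a toy illustration of the matching idea (entirely my own sketch in Java, not the real system), one could pair up variables from two programs by comparing the values each held while processing the same input:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class VariableMatcher {
    // Toy illustration only: match variables from two programs by the
    // sequence of values each held while processing the same input file.
    static Map<String, String> match(Map<String, List<Integer>> donor,
                                     Map<String, List<Integer>> recipient) {
        Map<String, String> mapping = new HashMap<>();
        for (Map.Entry<String, List<Integer>> d : donor.entrySet()) {
            for (Map.Entry<String, List<Integer>> r : recipient.entrySet()) {
                if (d.getValue().equals(r.getValue())) {
                    mapping.put(d.getKey(), r.getKey()); // same observed values
                }
            }
        }
        return mapping;
    }

    public static void main(String[] args) {
        // Hypothetical traces of two image programs reading the same file.
        Map<String, List<Integer>> donor = Map.of(
                "width",  List.of(640, 1280),
                "height", List.of(480, 720));
        Map<String, List<Integer>> recipient = Map.of(
                "imgW", List.of(640, 1280),
                "imgH", List.of(480, 720));
        // Prints {width=imgW, height=imgH} (entry order may vary).
        System.out.println(match(donor, recipient));
    }
}
```

The real system is far more sophisticated, but this conveys the core intuition: identical behavior on a shared input is evidence that two differently named variables play the same role.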

I chose this article because it really shows where AI and computer science are heading today. Automatic code reuse like this could make programs easier to understand and create. The ability to copy code from one program to another is impressive, but the best part is that the system can work out what each variable’s values mean and display the similarities between two programs. This could even allow the system to produce a program in multiple languages by reconfiguring the values into the desired language. I think this jump in technology can make life easier for programmers, but it also raises the issue of stealing code from others: if this program ends up doing what it was designed to do, people could take code off the internet and use it to produce a “new” program they call their own. That aside, it would be interesting to see how the system would take on the challenge of bridging software architectures across different languages, since syntax and naming conventions differ from language to language. The article showed me that AI-assisted tooling is really starting to take off, and it taught me about the relationship two programs can have based on the values they store in memory.

From the blog CS@Worcester – Student To Scholar by kumarcomputerscience and used with permission of the author. All other rights reserved by the author.