Category Archives: Week 10

Coding vs. Hacking: What Do You Really Need to Know?

This week, I will be talking about the differences between coding and hacking, some of the confusion surrounding them, and what your skill set should look like if you are learning to be a hacker. One question that will receive lots of attention is, “Do I need to learn coding to become a hacker?”

Let’s jump right into this. First, it’s important to note that coding and hacking are closely tied, but have important distinctions. Coding is the act of writing instructions, or code, for a computer, which can be done in many different languages. Hacking, on the other hand, is the act of identifying and exploiting weaknesses in a computer system or network, usually to gain unauthorized access to personal or organizational information (to put it simply, you are breaking in). Hacking is not always a malicious activity, and there are actually several examples where hacking is used for good, like penetration testing. Unfortunately, the term has garnered mostly negative connotations because of its association with cyber-crime. It is important to remember that hacking is a tool; it is not the tool itself that matters, but rather the intention of the person using it (what they wish to do with the tool).

Without a doubt, coding is a prominent part of hacking and has helped shape what it looks like today. If you are trying to learn about hacking, or are interested in taking part yourself, you would likely be doing yourself a disservice by having little to no prior knowledge of coding, because of how intertwined the two are, but coding is not absolutely necessary. In fact, there are multiple forms of hacking that require little to no coding skill. For example, social engineering is a type of hacking that focuses on the social, human aspect of security rather than the technical aspects. These attacks rely on human nature rather than code, and aim to manipulate people into compromising their personal security, or even the security of an entire network or organization they may be a part of.

In the podcast, Chuck raises an interesting question about having basic, fundamental knowledge of coding (specifically mentioning functions and classes) and asks if it’s really necessary to go much further than that if you are trying to become a hacker. John responds, “You don’t need to go much further beyond that. When people ask [that] question, I always say no, but with a disclaimer that you should learn some programming, but you don’t need to learn absolutely everything… I am not by any means a software engineer or architect, but I can script; I can write a loop that might brute-force passwords… and you don’t need to know a lot of hardcore, complex programming concepts for that. You just need to know the basics for that.”
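
To make John’s point concrete, here is a minimal sketch of my own (not code from the podcast) of the kind of simple loop he describes: trying candidate passwords from a wordlist. The checkPassword method is a hypothetical stand-in for whatever login check is being tested; nothing here goes beyond basic loops and conditionals.

    import java.util.List;

    class BruteForceSketch {
        // Hypothetical stand-in for the system under test (e.g., a login check).
        static boolean checkPassword(String candidate) {
            return "hunter2".equals(candidate); // placeholder "correct" password
        }

        public static void main(String[] args) {
            // A tiny candidate wordlist; a real attempt would read a large one from a file.
            List<String> wordlist = List.of("123456", "password", "qwerty", "hunter2");

            for (String candidate : wordlist) {
                if (checkPassword(candidate)) {
                    System.out.println("Match found: " + candidate);
                    return;
                }
            }
            System.out.println("No match in wordlist.");
        }
    }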

John makes some very good points as the show continues, and focuses on how the basic, rudimentary skills in programming are often the ones that require the most practice, both because of their importance and because of how frequently they are used to build more complex pieces of code. He believes that the best way to get that practice is by immersing yourself in the world of hacking and trying to solve real problems with the skills that you have.

In conclusion, coding and hacking, despite being so closely intertwined, have some very distinct differences, and as it turns out, you may not need to know as much about coding as you think in order to start learning about hacking or becoming a hacker yourself. You do not need to know everything there is to know about programming; some rudimentary knowledge is all it takes to get started and branch out from there.

This episode can be watched in full, for free here on YouTube: https://www.youtube.com/watch?v=T7AaBcNj-mA&t=0s

From the blog CS@Worcester – Owen Santos Professional Blog by Owen Santos and used with permission of the author. All other rights reserved by the author.

Misconceptions with OOP

I chose the blog post “People Don’t Understand OOP” by Sigma because it addresses recurring challenges in programming, specifically around OOP. Understanding how to improve my approach to OOP principles would help me write cleaner, more effective code that’s easier to maintain and adapt over time; not following these concepts has led to messy code in previous years. Personally, I feel like I made a lot of these mistakes when I first started coding; however, as more classes have gone by, I have been able to break some of these bad habits. Unfortunately, there are still times when I make these mistakes without thinking, which is what led me to choose this blog post: so I can learn how to avoid them in the future.

The blog post explores common misunderstandings surrounding Object-Oriented Programming (OOP). The author, Sigma, argues that misconceptions often stem from oversimplified metaphors and an incomplete grasp of fundamental principles like encapsulation and abstraction. Frequent mistakes include equating OOP with buzzwords like inheritance and getters/setters while neglecting its core concepts, such as bundling related state and behavior into cohesive units (objects) and minimizing dependencies through proper encapsulation.

The post highlights that encapsulation is not merely about hiding internal state but about reducing interdependencies and ensuring modularity. Public properties, often critiqued for exposing internal states, are likened to getters and setters in their inability to prevent object coupling. The author points out that real-world OOP is much more nuanced, involving trade-offs that depend on the problem domain and language constraints. A detailed comparison of popular languages, including JavaScript, Python, Rust, and Go, demonstrates varying implementations of OOP features like inheritance, subtyping, and encapsulation. 
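
To make the bundling idea concrete, here is a rough Java sketch of my own (not from Sigma’s post): the object’s state can only change through behavior that enforces its rules, so callers cannot couple themselves to its internals.

    // Encapsulation as bundling: the balance can only change through operations
    // that preserve the object's rules, so the rules live in exactly one place.
    class Account {
        private long balanceCents; // internal state, never exposed for direct mutation

        public void deposit(long cents) {
            if (cents <= 0) throw new IllegalArgumentException("deposit must be positive");
            balanceCents += cents;
        }

        public void withdraw(long cents) {
            if (cents <= 0 || cents > balanceCents)
                throw new IllegalArgumentException("invalid withdrawal");
            balanceCents -= cents;
        }

        public long balanceCents() {
            return balanceCents;
        }
    }

A bare getter/setter pair for balanceCents would technically hide the field, but, as the post argues, it would do nothing to stop callers from scattering the withdrawal rules across the codebase.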

The article made me rethink my use of OOP principles and highlighted that simplicity and adaptability should be the goal when programming. Going forward, I plan to be more thoughtful about whether OOP concepts like inheritance are necessary or whether simpler, more flexible design choices would work. The article taught me that well-designed OOP should evolve naturally rather than forcefully adhere to principles. This perspective will help me develop solutions that adapt to change more easily, making my work in software development more efficient. After reading, I feel I will make fewer of the mistakes that lead to inefficient use of OOP, creating a better workflow when coding.

From the blog CS@Worcester – Giovanni Casiano – Software Development by Giovanni Casiano and used with permission of the author. All other rights reserved by the author.

Blog Post Week 10

This week, I found an article questioning the relevancy of UML as Agile development processes become more and more common. This was an interesting article for me to read, as it points out some of the flaws of trying to use UML while developing software with Agile, but also a couple of ways it can still be useful. The overall impression I got from reading this article was that UML has its time and place, but it is certainly becoming a thing of the past as developers lean more toward an Agile development process.

The article states that in an Agile environment, requirements and design are typically not defined in detail prior to starting the project. As the project progresses, the requirements and design usually evolve over time. Also, formal documentation isn’t as much of a necessity during an Agile development process. These three big components of Agile clash quite a bit with UML, which pretty much requires the exact opposite: you can’t define the design and details up front, because you don’t yet have the information UML needs, and it would also take a lot of effort to keep the UML diagrams updated as the project evolves.

There are a couple of ways UML diagrams could still be useful, as listed in the article:

  1. Once development is completed, UML diagrams may be helpful for supporting the system.
  2. Using UML to define and standardize the architecture would be another reason to still practice it.

I think, as I stated earlier, there is a time and a place for using UML. Obviously, if you haven’t adopted an Agile development process, then UML could still be on the table, though most people do use Agile now. You can certainly still use UML diagrams, but if you want to use them efficiently, it should be in one or both of the ways mentioned above. The knowledge is still great to have, as UML, from what I’ve read and heard from other people, isn’t completely outdated; it just isn’t something worth stressing over if you’re not the most knowledgeable about how it works.

From the blog CS@Worcester – RBradleyBlog by Ryan Bradley and used with permission of the author. All other rights reserved by the author.

Testing…Testing…

This week, I have selected a blog about the concept of software testing, as this is a topic of focus in our course. Upon reading this article, it became very clear to me that, although I have used unit tests and other simple strategies, software testing has many important aspects that I am not familiar with. The post, titled “Software Testing 101: Get started with software testing types”, was written by The Educative Team for their blog Dev Learning Daily, which can be found here.

This blog highlights the many different software testing methodologies and cycles used by developers throughout the development life cycle. At a high level, software testing is used to evaluate and correct program functions, ensure that the build meets customer requirements, and confirm that the software is compatible with other components and systems. Most of us are familiar with the reasons we must test our software prior to production, but knowing how to test completely and comprehensively is the most vital part.

The post touches on black box vs. white box testing, automation vs. manual testing, functional testing methodologies, non-functional testing methodologies, and some useful general information and best practices related to the software testing life cycle. One topic that stuck out to me was the difference between functional and non-functional testing and the process each follows. I think the majority of my testing experience (if not all) has been rooted in functional testing, even if I did not know it at the time. From this post, I learned that functional testing has a cycle within itself focused on testing specific program behaviors. The process starts with unit testing, to test small components of a program; then integration testing, to ensure components work together; then system testing, to ensure a full build functions properly; and finally acceptance testing, with alpha testing completed by internal users and beta testing completed by external parties to get additional feedback without bias. There are many other types of testing mentioned that I had zero experience with, but after learning about them I am looking forward to seeing when and how I can begin to use these new tools to help me write useful code.

Our projects can benefit on many different levels from implementing testing in their development cycle, like minimizing user-experience conflicts and meeting customer expectations of fully functional requirements. I was able to learn about the many different kinds of testing that exist, in what circumstances they should be used, and how to implement them to get results in a real situation. The writers discussed the process for testing, which I think I can summarize very simply as:

  1. Determine what needs to be tested
  2. Create a test case
  3. Check result – Success? Move on! vs. Error? Solve it!

We can acknowledge that testing can become much more advanced than these steps, but the value gained makes it worthwhile; a minimal sketch of this loop follows.
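
Here is what a single pass through those three steps might look like with JUnit 5 (the Calculator class is a made-up example of mine, not from the post):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class Calculator {
        static int add(int a, int b) { return a + b; }
    }

    class CalculatorTest {
        // Step 1: what needs testing: the add method.
        // Step 2: the test case below encodes the expected behavior.
        // Step 3: run it. Green means move on; red means solve it.
        @Test
        void addReturnsSumOfTwoIntegers() {
            assertEquals(5, Calculator.add(2, 3));
        }
    }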

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Studying Fundamental Design Principles in Java

This week, I am sharing a blog post related to software design principles written by Alex Klimenko, a well-versed Java developer with experience in building applications, performance optimization, and multithreading. This blog was selected based on the relevance to our course topics and the relation being drawn to Java. Please consider reading Alex’s full blog here.

Klimenko’s post covers three main principles: DRY, KISS, and YAGNI, all of which are vital for avoiding common coding pitfalls and ensuring that we write code that others can use and that our future selves can be proud of. The writer’s description of these principles allowed me to grasp the concepts on a deeper level and learn how they can really make a difference when properly implemented.

The DRY principle is an acronym for the phrase “Don’t Repeat Yourself”, which Klimenko also equates to “Do It Once”. The blog explains this concept by focusing on the root idea: no repetition leads to less code, resulting in fewer errors, which means the code itself is easier to maintain and update over time. Klimenko encourages the use of encapsulated utility classes and methods for common tasks, the use of polymorphism and inheritance to avoid duplicating code, and design patterns like the Template Method to refactor common behaviors.
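
To picture the utility-class advice, here is a small sketch of my own (not code from Klimenko’s post): a null/blank check written once and reused, instead of copy-pasted into every constructor.

    final class Validation {
        private Validation() {}

        // Written once, used everywhere: fixing a bug here fixes it for all callers.
        static String requireNonBlank(String value, String fieldName) {
            if (value == null || value.isBlank())
                throw new IllegalArgumentException(fieldName + " must not be blank");
            return value.trim();
        }
    }

    // Without DRY, this null/blank check would be duplicated in every constructor:
    class User {
        private final String name;
        private final String email;

        User(String name, String email) {
            this.name  = Validation.requireNonBlank(name, "name");
            this.email = Validation.requireNonBlank(email, "email");
        }
    }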

The KISS principle is spelled out as “Keep It Simple, Stupid” and really sticks out to me, as I know I have run into issues in this realm before. The goal of this principle is to favor straightforward solutions over unnecessarily complex ones. To do this in Java, Klimenko offers several recommendations, ranging from clear and concise naming conventions, to following Java’s best practices such as those in the Java Code Conventions and the Java Language Specification, to making use of standard libraries and frameworks.
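
As a quick illustration of the naming point (my own example, not Klimenko’s), the two methods below compute the same thing, but only one of them can be read at a glance:

    class InterestCalculator {
        // Hard to follow: cryptic names compress the intent out of the code.
        static double c(double p, double r, int n) {
            return p * Math.pow(1 + r, n);
        }

        // KISS version: the same computation, with names that explain themselves.
        static double compoundedAmount(double principal, double annualRate, int years) {
            return principal * Math.pow(1 + annualRate, years);
        }
    }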

The YAGNI principle represents the phrase “You Aren’t Gonna Need It”, which in software development means avoiding adding functionality or complexity to code until it is required by the customer’s current specifications. The writer describes this as avoiding “speculative development”, or building for future needs that do not yet exist. A few of the ways Klimenko relates this principle to Java are by encouraging minimalistic class design, lean dependency management, and avoiding premature optimization.
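
Here is a small sketch of my own showing the difference in spirit; the speculative version is left as comments because, under YAGNI, it should never be written in the first place:

    // Speculative version (what YAGNI warns against): a strategy interface,
    // configuration flags, and builders added "just in case", none of it
    // required by any current specification.
    //
    //   interface GreeterStrategy { String greet(String name); }
    //   class ConfigurableGreeter { /* strategy + flags + builders... */ }

    // YAGNI version: exactly what today's requirement calls for, nothing more.
    class Greeter {
        String greet(String name) {
            return "Hello, " + name + "!";
        }
    }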

After reflecting on all that this blog has helped me understand, I have become much more aware of some of the poor qualities that likely exist in my own code, but I also feel prepared to begin applying these concepts in my work so I can become a better programmer with sound principles. Mastering these principles will take years of practice, but by beginning to apply them now, as a student, I can ensure that my approach to coding is constantly improving.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

SOLID

SOLID is an acronym for a set of design principles for object-oriented programming that is used to help developers create flexible, efficient, and easily maintainable software. I think the article “SOLID: The First 5 Principles of Object Oriented Design” by Samuel Oloruntoba and Anish Singh Walia does a great job explaining the SOLID principles. I plan on using these principles in the future to further develop my code and to make sure it is easy to read and understand.

Here is what each of the letters in SOLID stands for:

S: Single Responsibility Principle (SRP)
O: Open-Closed Principle (OCP)
L: Liskov Substitution Principle (LSP)
I: Interface Segregation Principle (ISP)
D: Dependency Inversion Principle (DIP)

SRP states, “A class should have one and only one reason to change, meaning that a class should have only one job.” Essentially, this means that any object or class should be made for one specific function, which makes the code easier to understand.
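
As a quick Java sketch of my own (the article’s own examples are worth reading too), splitting a class that both formats and stores reports gives each piece exactly one reason to change:

    // Each class now has a single job: formatting changes never touch storage.
    record Report(String title, String body) {}

    class ReportFormatter {
        String toPlainText(Report report) {
            return report.title() + "\n" + report.body();
        }
    }

    class ReportRepository {
        void save(Report report) {
            // persistence logic only (e.g., write to a database or file)
        }
    }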

OCP states “Objects or entities should be open for extension but closed for modification.” The article states that “This means that a class should be extendable without modifying the class itself.” I think this article does a great job explaining what all of the different principles mean as well as giving examples for each of them. I strongly recommend using this website if you’re trying to learn about SOLID.
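
For OCP specifically, here is a rough Java sketch of my own: adding a new shape means adding a class, not editing the ones that already work.

    // Open for extension, closed for modification: new shapes implement the
    // interface; the existing area code is never edited.
    interface Shape {
        double area();
    }

    class Circle implements Shape {
        private final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    class Square implements Shape {
        private final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }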

LSP states, “Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T.” At first, this really confused me. After doing some research and coming across this website, I began to understand the principle better. In simpler terms, it means that every subclass should be substitutable for its parent class. If you need examples to see how this works, I highly recommend looking at the linked article.
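
The classic way I have seen this violated is a subtype that breaks its parent’s promise; a small sketch of my own:

    class Bird {
        void fly() { System.out.println("flying"); }
    }

    // LSP violation: Penguin is a Bird in the type system, but it breaks the
    // promise Bird makes, so code written against Bird can no longer trust it.
    class Penguin extends Bird {
        @Override
        void fly() { throw new UnsupportedOperationException("penguins cannot fly"); }
    }

The fix is to restructure the hierarchy (for example, a separate FlyingBird type) so that every subtype really can stand in for its parent.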

ISP states, “A client should never be forced to implement an interface that it doesn’t use, or clients shouldn’t be forced to depend on methods they do not use.” In other words, software should be broken down into smaller, more specific interfaces, so that no client depends on code it doesn’t use.
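
A small sketch of my own: splitting one fat interface means a basic printer no longer has to fake methods it cannot support.

    // Small, focused interfaces that clients implement only as needed.
    interface Printer {
        void print(String document);
    }

    interface Scanner {
        void scan(String document);
    }

    class BasicPrinter implements Printer {          // no fake scan() method required
        public void print(String document) {
            System.out.println("Printing: " + document);
        }
    }

    class MultiFunctionDevice implements Printer, Scanner {
        public void print(String document) { System.out.println("Printing: " + document); }
        public void scan(String document)  { System.out.println("Scanning: " + document); }
    }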

DIP states “Entities must depend on abstractions, not on concretions. It states that the high-level module must not depend on the low-level module, but they should depend on abstractions.” In simple terms, both high-level and low-level modules should depend on abstractions, and abstractions should not depend on details.
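
In Java, this usually looks like constructor injection against an interface; a minimal sketch of my own:

    // The high-level service depends on an abstraction; the low-level detail
    // (console, email, etc.) is chosen outside and injected in.
    interface MessageSender {
        void send(String message);
    }

    class ConsoleSender implements MessageSender {
        public void send(String message) { System.out.println("sending: " + message); }
    }

    class NotificationService {
        private final MessageSender sender; // abstraction, not a concrete class

        NotificationService(MessageSender sender) { this.sender = sender; }

        void notifyUser(String message) { sender.send(message); }
    }

    // Usage: new NotificationService(new ConsoleSender()).notifyUser("build passed");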

Link to article: https://www.digitalocean.com/community/conceptual-articles/s-o-l-i-d-the-first-five-principles-of-object-oriented-design#dependency-inversion-principle

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

Response Codes

In class, after learning about a few select HTTP response codes, I wanted to look into the whole library of possible codes to get a better understanding of how website calls work and the potential errors that come with them. The blog I chose gives a brief introduction to why knowing the meaning of the response codes is important for managing or using a website. Before going into the specific definitions, the author states the main takeaways at the beginning of the article, which helps the reader know what to look out for as they read ahead.

The codes represent the types of responses between the web server and the browser: every time a browser requests a URL, the server returns an HTTP status code with its response. The author goes on to explain that making sure your pages return successful HTTP codes is a good way to promote a website, because search engines use the response codes to determine whether a URL will show up as a result.

Next is the part of the article that shows how the codes are grouped and defined by their first digit, which I didn’t know and found helpful. Some classes we didn’t go over in class were the 1xx codes, for informational responses, and the 3xx codes, for redirection. A reference table then gives the corresponding definition for each code, and the author goes into more detail about how search engines use these codes to determine which pages get recommended to users.
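
To see the first-digit grouping in action, here is a small sketch using Java’s built-in HttpClient (available since Java 11); the URL is just a placeholder:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    class StatusCheck {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/")).build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            int code = response.statusCode();
            // The first digit gives the class of response described above.
            switch (code / 100) {
                case 1 -> System.out.println(code + ": informational");
                case 2 -> System.out.println(code + ": success");
                case 3 -> System.out.println(code + ": redirection");
                case 4 -> System.out.println(code + ": client error");
                case 5 -> System.out.println(code + ": server error");
                default -> System.out.println(code + ": unknown class");
            }
        }
    }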

I tend to learn best by looking up libraries of every possible response/function/use for something, deepening my understanding of a topic, and learning how it works and why it was made that way. This will also be helpful as both a developer and a user, because now when I see an HTTP response code, I will know what it means and what I would need to do to fix or get around the problem. I also learned how important HTTP response codes are for increasing website traffic, which is another reason to have efficient web code and something I wouldn’t have thought of. Doing self-directed research outside of class on course topics is very helpful for connecting different topics together, as well as seeing how they relate to work done in the field.

Common HTTP Response Codes Explained – Neil Patel

From the blog CS@Worcester – Computer Science Blog by dzona1 and used with permission of the author. All other rights reserved by the author.

How Spikes Can Help Scrum Teams Navigate Complexity and Uncertainty

In the world of Scrum, navigating uncertainty and complexity is a key challenge. One practice that helps teams manage this is the use of “spikes”—a technique for addressing uncertainty by dedicating time to research, exploration, or experimentation. A blog post titled “Navigating Uncertainty: Crafting Effective Spikes in Scrum” provides a detailed examination of how to create and manage spikes effectively in Scrum. The post breaks down the concept of spikes, offering clarity on when and how to implement them in a way that maximizes value while minimizing disruption to the team’s flow.

A spike is essentially a time-boxed period where a team focuses on gathering information, resolving technical debt, or experimenting with a new technology, rather than delivering functional product increments. This is essential for reducing risk and uncertainty, particularly when the team faces unknowns that could impact the project’s success.

I selected this blog post because it directly relates to Scrum’s emphasis on adaptability and continuous learning. Understanding how to manage spikes effectively is crucial to achieving that adaptability. It complements key elements of the Scrum framework covered in our class, especially the Sprint Cycle and the role of the Product Owner in prioritizing work. In a Scrum environment, having a strategy for managing uncertainty aligns with the framework’s focus on iterative progress, continuous improvement, and adaptability.

Reading the blog post helped me better understand how spikes function within Scrum. One important takeaway is that spikes are not a sign of poor planning but rather a proactive strategy for tackling uncertainty. In previous projects, I often found myself overwhelmed when faced with unknowns; treating them as explicit, time-boxed spikes would have made that uncertainty far more manageable.

The concept of spikes is a useful technique within Scrum for managing uncertainty: it reduces risk, improves decision-making, and maintains focus on delivering valuable increments of work, ensuring that the Sprint Cycle remains productive and focused on achieving the Sprint Goal.

For more information, you can read the original blog post here.

From the blog SoftwareDiary by Oanh Nguyen and used with permission of the author. All other rights reserved by the author.

Blog Post VERS. 4.1.63

 Greetings! 

This week during class, among other things, we learned about semantic version numbers. As the name implies, the numbers carry meaning: deciding which kind of change gets which kind of version number requires some real thought, which I admittedly hadn’t given the subject before now. The MAJOR.MINOR.PATCH format does seem rather useful and straightforward, but actually figuring out how to classify something like changing how a print command prints to the console seems like a lot of work. I always had this notion that developers kind of just picked version numbers at random, or at least sequentially; I didn’t know there was an actual structure behind what appears to be a simple string of numbers.
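
To pin the structure down for myself, here is a small Java sketch of the rules as semver.org describes them: breaking changes bump MAJOR, backwards-compatible features bump MINOR, and backwards-compatible fixes bump PATCH. So a tweak to how a print command writes to the console would most likely be just a PATCH, assuming nothing depends on the old output.

    class SemVer {
        static String bump(String version, String changeKind) {
            String[] parts = version.split("\\.");
            int major = Integer.parseInt(parts[0]);
            int minor = Integer.parseInt(parts[1]);
            int patch = Integer.parseInt(parts[2]);

            return switch (changeKind) {
                case "breaking" -> (major + 1) + ".0.0";                 // lower parts reset
                case "feature"  -> major + "." + (minor + 1) + ".0";     // patch resets
                case "fix"      -> major + "." + minor + "." + (patch + 1);
                default -> throw new IllegalArgumentException("unknown change kind");
            };
        }

        public static void main(String[] args) {
            System.out.println(bump("4.1.63", "fix"));      // 4.1.64
            System.out.println(bump("4.1.63", "feature"));  // 4.2.0
            System.out.println(bump("4.1.63", "breaking")); // 5.0.0
        }
    }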

I feel like with writing these posts I have a tendency to seek out blog posts that contradict, or speak about the shortcomings of, what we learn in class. I don’t mean to be a cynic; I just want to be aware of what can go wrong when using such a structured formatting method. That being said, I read “Semantic Versioning is a terrible mistake” from the Reinvigorated Programmer, the personal blog of a career programmer and hobby archeologist. While I don’t fully agree with the overly cynical title of the article, I do believe it makes some very valid points. Within it, the writer speaks about the problems with numbered releases for software: they make it easy for programmers to ship frequent breaking changes, each denoted by a new version number. Instead of a real “major release”, a version bump becomes an excuse to release a small breaking change as its own version. I can see obvious problems with this, such as the tedium of upkeep and maintenance. While I haven’t worked with many of these constantly changing APIs in my school programming career, I can certainly relate to the struggle of dealing with a constant influx of new versions. Anyone who has tried to play the video game Minecraft, and attempted to mod said game, knows how difficult it can be to make sure everything works with the same base version of the software.

Overall this article was pretty good! I enjoyed the semi-comedic tone of the author, and it feels a little less dry than some of the other more technical blogs I’ve viewed in my time. In terms of semantic versions, I am glad I’ve taken the time to look into it further, as I think it’s helped me clarify what the differences are between the different numbers, and what it means to release a major version following this numeric scheme. So the next time I use a piece of software that has some history to it, and is on version 21.3.56, I can smile in satisfaction at the fact that I know what that means, but also grimace at the fact that, inevitably, software will break. Eugh.


Article Link: https://reprog.wordpress.com/2023/12/27/semantic-versioning-is-a-terrible-mistake/

From the blog Camille's Cluttered Closet by Camille and used with permission of the author. All other rights reserved by the author.
