Category Archives: Week 11

The Best Linux Distro to Learn to Become a Hacker

This week, I will be talking about the “best” Linux distro you should learn in order to get the most out of your hacking. This topic is widely debated, and you will get many different answers from asking around, with people claiming different distros to be the “best.” Although that title can be somewhat arbitrary, some specific Linux distributions are objectively better suited for hacking than others.

Kali Linux is often regarded as one of the best distros, if not the best, to learn for hacking, because it was specifically designed for digital forensics and penetration testing. Developed by Offensive Security, this Debian-based distro remains a favorite among coders and hackers, and comes loaded with security testing tools, powerful programs, and applications that make life easier for people who want to become hackers (or already are). Although it can be a bit overwhelming at first, Kali is extremely helpful for beginners, because all of these tools are laid out for you, which helps you learn right away how to use them and what their capabilities are.

In the podcast, John mentions that he typically uses Ubuntu, and that he has people who ask, “John, why are you using Ubuntu when you could have been using Kali or Parrot OS?” He responds, “I think it’s really valuable to learn how to install those tools, learn how to configure those tools, watch, and see them break–because then you’ll be able to figure out how to fix them, and you’ll be able to troubleshoot and understand what are the packages, what are the repositories, how does this all work within Linux.” He believes that getting through the learning curve is worth it because it will ultimately be good for your own learning and growth. At the end of the day, each distribution of Linux is going to have its own strengths and weaknesses, and will be a little different from each other. Having knowledge and experience about these tools will allow you to use them when solving problems and will make you a better hacker.

I have never experimented with Kali Linux before, but I do have some experience when it comes to learning about hacking through Linux. There is a special, custom-made distribution of Linux called Tails, and like Kali, it is based on Debian. However, there is a very big difference between these two distros; while Kali seems to focus more on offensive hacking, Tails is more defensive and prioritizes both privacy and security through its unique interface. It is made to be booted as a live DVD or USB and never writes to the hard drive or SSD, instead using RAM, and leaves no digital footprint on the machine (unless told otherwise).

In conclusion, Linux is still somewhat unfamiliar to me, as I only have limited experience. I would like to learn more about Kali Linux in particular, but would also like to explore other distributions and learn about their potential.

Watch it here:
https://www.youtube.com/watch?v=T7AaBcNj-mA

From the blog CS@Worcester – Owen Santos Professional Blog by Owen Santos and used with permission of the author. All other rights reserved by the author.

Managing a product backlog within Scrum

With an honors project coming up for one of my courses, I was going to have to learn how to become a single-person Scrum team. With the average Scrum team being seven to ten people, I knew it was going to be both a strange and difficult task.

I knew my first order of business would be to create a product backlog, as I am the product owner (among many other roles, being the only member of the team). Diving in headfirst, I knew what a product backlog was but not how to set up an effective one.

Thankfully, “A guide on Scrum product backlog” by Brianna Hansen was the perfect blog to stumble across. She eloquently states what a product backlog is, why one should be maintained throughout a project, and how to create a product backlog geared towards success. As an added bonus the end of the blog even provides a platform to create and maintain a product backlog.

As I previously stated, I already knew what a product backlog is: everything that needs to be done for a product, including maintaining it. As much as a product backlog is a to-do list, one way to increase success is to not overload it. Keep it simple but effective. No one on the Scrum team (in this case, me) wants to scroll through a product backlog for hours.

Time management is crucial for a product backlog. Certain items in the backlog are going to be more time-consuming than others, so considering this when moving product backlog items into the sprint backlog is very important to sprint success.

Defining the product vision is one of the major points she gives for maintaining a successful product backlog. This usually involves the whole team getting involved to make sure the vision for the product is shared. While in my case I may be the only member, Hansen does give some very important questions for me to ask myself when planning my product and adding items to the backlog.

  • “What problem does the product solve?”
  • “Who are the target users or customers?”
  • “What unique value does the product offer?”

Taking these questions into consideration will help to guide me through this project and help to increase my chances of success.

Finding this blog was incredibly helpful for taking my first steps into trying Scrum firsthand and I intend to use what I learned as I navigate my honors project.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.

Blog Week 11

This week, I found an article that came out only about eight days ago. The article, titled “Academic papers yanked after authors found to have used unlicensed software”, dives into two authors who had their paper retracted from an academic journal because they used software for which they had not obtained a license.

The authors used software called “FLOW-3D” to help them calculate results exploring dam failures. When the parent company, appropriately named “Flow Science”, found out about this, they made a complaint and were quickly able to have the papers taken down by Elsevier’s Ain Shams Engineering Journal. The journal’s editor-in-chief even stated: “One of the conditions of submission of a paper for publication is that the article does not violate any intellectual property rights of any person or entity and that the use of any software is made under a license or permission from the software owner.”

I find this to be an important issue that should certainly be talked about. This semester, we already learned about the importance of obtaining these licenses for a multitude of reasons, but I don’t believe we saw any real-life examples of the consequences of failing to do so. This article shows that, especially if required by a certain company, journal, or other entity, using or creating software without properly obtaining a license can have repercussions. While I’m unsure if I’ll be writing an academic paper in the near future, I imagine at some point, if I further my education, I will. And if I do, and I need to use software the way the people in the article did, I will certainly make sure I am not breaching any guidelines or laws when it comes to software licensing.

The article also mentions that, in 2023, over 10,000 research papers were retracted, a new record. I’m unsure if these were all due to licensing issues or if other issues were involved as well, but it still proves the point that this is taken very seriously and you can face repercussions if you are not careful.

There are also many other types of consequences you may face by not getting licenses when necessary, but I think this article is a good one to read, especially if you plan on writing a paper or study at some point that relies on software to support your results. It is always good to know up front what you need to do to write a paper appropriately, so you don’t have to deal with the consequences later.

From the blog CS@Worcester – RBradleyBlog by Ryan Bradley and used with permission of the author. All other rights reserved by the author.

The Importance of Software Licenses

We spent a lot of time in class going over the different types of software licenses and the reasoning for why they are necessary, so I figured that it would make sense to dig deeper into the topic.

The blog that I selected introduces software license management as a way to strategically optimize and manage the usage of software within a company or organization. The blog post defines software license management (SLM) as the process of ensuring compliance with software licensing agreements while also minimizing overspending and inefficiency throughout the process. The post also covers the reasons why SLM matters, pointing to the operational benefits that come with proper SLM as well as the legal and financial risks associated with non-compliance. Throughout the blog, the various types of licenses are mentioned and explained, such as open-source, copyleft, and permissive licenses.

I selected this blog post because of its relevance to what we learned in class, as it gave a refresher on software licenses and also gave further elaboration on the overall subject. This blog also shows why software license management is important to a company, as it often leads to an increase in overall operational efficiency as well as compliance efficiency. This could be useful for me in the future as this article explained the importance of SLM in a workplace environment, which is very useful context to have.

This article gave me a larger understanding of software license management and its benefits. It also helped clarify my understanding of how companies tailor their strategies based on their operational needs, as the differentiation of software license types made this clear to me. I was very intrigued by the risks of not having SLM, as this can lead to a lot of inefficiencies as well as costly penalties for an organization, which is something I had never really considered. At the end of the article, it lists a number of automation tools for SLM, which I found very interesting. This made me think about how technology can really simplify complex tasks, which can be both beneficial and detrimental for my future. I intend to apply this knowledge of software licenses and SLM in future workplaces. To me, the benefits outweigh the negatives, as implementing SLM will only increase efficiency within a company, which is generally a goal for most companies. Overall, this blog post did a great job of explaining the different types of software licenses and showing me why software license management is so important, as well as why it is necessary for overall efficiency within an organization.

Source: https://whatfix.com/blog/software-license-management/

From the blog CS@Worcester – Coding Canvas by Sean Wang and used with permission of the author. All other rights reserved by the author.

Week 11 Blog Post

The article I read this week was called “The Art of Writing Amazing REST APIs” by Joy Ebertz. In this article, Joy explores the principles for designing effective REST APIs. It discusses key practices such as naming conventions, including IDs and types, ensuring resource stability, and bridging gaps between current and ideal API states. I’m going to get into some interesting points or ideas she has, and talk a little bit more about them. To wrap it up, I will explain how I think it could be beneficial to take, or at least consider, the principles she describes and apply them to a project down the road.

The first thing she talks about, and I believe one of the most important, is consistency. It is the foundation of writing great REST APIs. Consistency can, and should, be applied to every aspect of writing REST APIs: when everything is consistent, confusion is kept to a minimum, which should be what everyone is aiming for. If one endpoint returns user_ID while another returns simply ID, you can see where confusion could occur. Standardizing these names makes everything much more efficient.

Another point she mentions is that every resource should include a unique ID and a Type field to ensure clarity and stability. Unique IDs prevent ambiguity, enabling resources to be fetched or referenced consistently and effectively. Including a type prepares APIs for future flexibility, such as supporting multiple resource types in a single response.
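To make this concrete, here is a minimal sketch of what such resource payloads might look like. The resource names and field values are invented for illustration and are not taken from the article; only the idea of consistent `id` and `type` fields comes from it.

```python
# Hypothetical JSON-style payloads: every resource carries the same
# "id" and "type" fields, so callers never have to guess field names.
user = {"id": "u-123", "type": "user", "name": "Ada"}
team = {"id": "t-456", "type": "team", "name": "Platform"}

def describe(resource: dict) -> str:
    # Because both resources use identical field names, one helper can
    # handle either of them -- the payoff of consistency.
    return f'{resource["type"]} {resource["id"]}: {resource["name"]}'

print(describe(user))  # user u-123: Ada
print(describe(team))  # team t-456: Platform
```

If a response ever needs to mix users and teams, the shared `type` field tells the client how to interpret each entry without any out-of-band knowledge.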

Joy talks about many different principles and ideas to writing REST APIs but the previously mentioned principles were a couple of the more important ones I believe, as well as just my favorite ones in general.

Career-wise, I think applying these principles would be very smart. Aiming for an extremely consistent REST API benefits everyone involved, and using unique IDs and Type fields only adds further benefits. Applying all the principles she mentions in the article would certainly be one of the best ways to write a REST API, though different people may prefer different approaches, so there will always be some differences of opinion over what makes a good REST API. If anything, you should at least take the principles she presents and try to apply them, in one way or another, to your future work, as doing so can only advance your skills. This is certainly something I will try to apply given the opportunity.

https://jkebertz.medium.com/the-art-of-writing-amazing-rest-apis-dc4c4100478d

From the blog CS@Worcester – RBradleyBlog by Ryan Bradley and used with permission of the author. All other rights reserved by the author.

GitHub’s Multi-Model AI Approach

This week, I explored the article “GitHub Copilot Moves Beyond OpenAI Models to Support Claude 3.5, Gemini” on Ars Technica. It discusses GitHub Copilot’s recent expansion to integrate advanced AI models like Anthropic’s Claude 3.5 and Google’s Gemini, diverging from its earlier dependence solely on OpenAI. This development is a pivotal moment for AI-driven coding tools, as it allows GitHub to offer developers more diverse and powerful AI models tailored to different tasks.

The article highlights how GitHub Copilot, widely known for assisting developers by generating code snippets and reducing repetitive tasks, is evolving to deliver greater flexibility and efficiency. The inclusion of Claude 3.5 and Gemini enhances Copilot’s ability to handle more complex coding tasks, such as debugging and system design, while maintaining security standards. This shift also underscores GitHub’s focus on model diversity to better cater to developer preferences and workloads. Beyond coding, Copilot’s broader goal is to support the entire software development lifecycle, including planning and documentation.

I chose this article because our course emphasizes practical applications of programming tools and their role in optimizing workflows. GitHub Copilot, as an AI coding assistant, directly relates to concepts we’ve discussed about improving productivity and leveraging technology in software development. Additionally, we’ve been learning about programming tools and techniques that prioritize efficiency—qualities Copilot exemplifies. Understanding these cutting-edge advancements gives me insight into tools I may encounter in future projects or internships.

What resonated most with me was the article’s emphasis on customization and adaptability in AI-powered development tools. The idea that developers can now choose AI models best suited for specific coding challenges is intriguing. I also appreciated the focus on how Copilot is being adapted to aid more than just coding, reflecting the growing need for holistic development tools. The integration of AI into documentation and planning ties back to what I learned from Bob Ducharme’s blog post on documentation standards, reinforcing the interconnectedness of these areas.

This article expanded my understanding of how AI tools like Copilot are becoming indispensable in software development. I had previously viewed Copilot primarily as a code completion tool, but I now see its potential as a comprehensive assistant for developers, offering support from ideation to deployment. Learning about the integration of Claude 3.5 and Gemini also taught me the value of model diversity in addressing different problem domains. This understanding will guide me in choosing and utilizing similar tools effectively in the future.

In practice, I plan to adopt AI-powered tools like Copilot to streamline my coding process, especially when tackling repetitive tasks or unfamiliar languages. By using AI to augment my development workflow, I can focus more on problem-solving and innovative aspects of programming. Furthermore, as I become more familiar with these tools, I’ll aim to explore their broader capabilities, such as system design support and technical documentation.

Overall, this article is a must-read for anyone interested in how AI is transforming software development. It provides valuable insight into the next generation of development tools, showcasing how GitHub is positioning Copilot as an essential resource for developers. I highly recommend checking it out: GitHub Copilot Moves Beyond OpenAI Models to Support Claude 3.5, Gemini.

From the blog CS@Worcester – CS Journal by Alivia Glynn and used with permission of the author. All other rights reserved by the author.

Time to REST

In the past few weeks, we’ve been learning about and working with REST and how it ties into the larger picture of software architecture. I found this section confusing at times but mostly interesting, thanks to the connection to personal experience. Once I realized that this is essentially the behind-the-scenes of those HTTP errors I’ve seen all my life, I was excited to learn more.

Web services that adhere to REST are called RESTful APIs, or REST APIs for short. A REST API is an API with a uniform interface “that is used to make data, content, algorithms, media, and other digital resources available through web URLs” (Postman). REST APIs are defined by three main aspects: a base URL, a media type for any data to be sent to the server, and standard HTTP methods. Four HTTP methods are generally used in REST APIs:

  • GET: This method asks the server to find the data you requested and send it back to you.
  • PUT: This method asks the server to update (replace) an existing entry in the database.
  • POST: This method asks the server to create a new entry in the database.
  • DELETE: This method asks the server to delete an entry in the database.

These methods will have different effects depending on whether they are used to address a collection or an element (Postman).
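The four methods above map onto the classic create/read/update/delete operations. Here is a tiny in-memory sketch of that mapping, assuming nothing beyond the semantics listed in the bullets; the "database" is just a Python dict and all names are invented for illustration.

```python
# Invented in-memory stand-in for a REST resource collection.
db = {}
next_id = 1

def post(data):           # POST: create a new entry, return its id
    global next_id
    entry_id = next_id
    db[entry_id] = data
    next_id += 1
    return entry_id

def get(entry_id):        # GET: fetch an entry (None plays the role of 404)
    return db.get(entry_id)

def put(entry_id, data):  # PUT: update (replace) an existing entry
    db[entry_id] = data

def delete(entry_id):     # DELETE: remove an entry
    db.pop(entry_id, None)

uid = post({"name": "Kyler"})
assert get(uid) == {"name": "Kyler"}
put(uid, {"name": "Kyler L."})
assert get(uid) == {"name": "Kyler L."}
delete(uid)
assert get(uid) is None
```

A real REST server would route each HTTP verb on a URL to functions like these; the sketch only shows the shape of the behavior, not any particular framework.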

The part of REST that I found most interesting, and that I’ve seen before, is the HTTP response codes. If you try to go to a website and all you are met with is a white screen with black text stating “404: Error” followed by some details, that is a response code. Response codes differ slightly depending on the HTTP method used, but the most common ones include:

  • 200 OK
  • 404 Not Found
  • 400 Bad Request
  • 500 Internal Server Error

I found this topic to be pretty interesting and, honestly, necessary for anyone who works with the web. Knowing the internet and my luck, I’m sure to encounter many of these errors for the foreseeable future, and they’ll serve as a reminder of what I’ve learned about REST.
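As a side note, Python’s standard library already names the common response codes listed above, so you can look them up instead of memorizing the numbers. This is just a convenience for exploring the codes, not anything specific to the Postman post:

```python
from http import HTTPStatus

# Print the numeric value and reason phrase for the four codes the
# post lists as the most common ones.
for status in (HTTPStatus.OK, HTTPStatus.NOT_FOUND,
               HTTPStatus.BAD_REQUEST, HTTPStatus.INTERNAL_SERVER_ERROR):
    print(status.value, status.phrase)
# 200 OK
# 404 Not Found
# 400 Bad Request
# 500 Internal Server Error
```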

Side Note: Just the other day, when I was trying to log into Blackboard, I was met with a 404 error. I thought I could just reload the page or open a new tab and get into Blackboard but I was still getting the error. It took about five minutes before I could actually log in. I realized that knowing the details of these HTTP responses doesn’t make them any less annoying.

Source: https://blog.postman.com/rest-api-examples/

From the blog CS@Worcester – Kyler's Blog by kylerlai and used with permission of the author. All other rights reserved by the author.

Software Licensing

For my second blog post, I will explore Software Licensing. I chose this topic because we have learned in class about the different types of licensing and what you can or can’t do with specific ones. There are multiple options to choose from when selecting a license for a program or software, and these differences can sometimes be confusing. I decided to dive deeper into this topic and found a blog titled “Understanding Software Licensing” by Fernando Galano.

The blog is divided into five sections: What is a software license? How does it work? Why does it matter? What are the types of software licenses? How do you find the perfect license for your software? The blog does a good job highlighting the importance of a license and the complexities of the legalities that come with terms and conditions. As we learned in class, a license is a type of legal contract between the creator(s) and the users interacting with the software. This interaction for the users can include the installation, modification, copying, or distribution of the software. It’s ultimately up to the creator(s) to select the license they want, which then defines what rights the users possess with the software.

In our class discussions about software licenses in module two, we discussed the different types of licenses and which is the best. The blog also does a good job explaining the differences between them while giving examples of companies that use certain licenses. Some examples of software licenses are copyleft, public domain, permissive, and open source. Each of these licenses has its benefits and limitations, and it is up to the creator to line up the values they want their software to reflect. For example, a public-domain license places no restrictions on how the software can be used, while a copyleft license forces derived work to carry the same conditions.

The blog ends by explaining why licenses even matter and how to choose the one that best fits your situation. It explains that licenses have benefits for both the user and the creator, helping both parties understand the terms and conditions of use while avoiding confusion. The author highlights the benefits for the user as an aid to managing tools and resources, clarifying how you can use them, and preventing unnecessary costs for tools the user may not need. The benefits for the creator include preventing unauthorized copying and distribution, limiting liability, and giving the creator control over the usage of their product. What I wish the blog talked about is the related topic of copyright. We learned about this important topic in part one of software licenses by answering questions like, “You have a new idea for a program, can you copyright that idea?” or “You created a new program based on that idea, is that software copyright-able?”

This blog did a great job explaining how licenses protect the user and creator while going in-depth on the different types to choose from.

Source: https://www.bairesdev.com/blog/understanding-software-licensing/

From the blog Mike's Byte-sized by mclark141cbd9e67b5 and used with permission of the author. All other rights reserved by the author.

A Reflection on Brygar’s Principles and OOD

Brygar’s principle of Simplicity resonates with Abstraction, as both focus on reducing complexity and emphasizing essential features. For example, an abstract Shape class might define a method draw(), but the specific implementation (how a circle or square is drawn) is handled in the subclasses Circle and Square. This simplifies the user interface by exposing only the necessary functionality.
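The Shape example above can be sketched in minimal Python. The class names come from the post; the method bodies are invented purely for illustration.

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def draw(self) -> str:
        """Each subclass decides how it is drawn."""

class Circle(Shape):
    def draw(self) -> str:
        return "drawing a circle"

class Square(Shape):
    def draw(self) -> str:
        return "drawing a square"

# Callers only see Shape.draw(); the how of each shape stays hidden.
print([shape.draw() for shape in (Circle(), Square())])
```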

Consistency aligns with Encapsulation, which involves bundling data and the methods that operate on it into a single unit and controlling access to the object’s internal state. For example, a Student class might have private fields like name and age. These fields are accessed or modified only through controlled public methods like getName() or setName(), ensuring that the class remains consistent in its behavior.
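A quick Python sketch of the Student example might look like the following. Python does not enforce private fields the way Java does, so the leading underscore is only a convention; the validation rule in the setter is my own invented example of "controlled access."

```python
class Student:
    def __init__(self, name: str, age: int):
        self._name = name   # underscore marks these as internal by convention
        self._age = age

    def get_name(self) -> str:
        return self._name

    def set_name(self, name: str) -> None:
        # The setter guards the internal state, keeping behavior consistent.
        if not name:
            raise ValueError("name must not be empty")
        self._name = name

s = Student("Alice", 20)
s.set_name("Bob")
print(s.get_name())  # Bob
```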

Modularity is closely related to Polymorphism, which enables objects of different classes to be treated as objects of a common superclass. By treating different objects as instances of the same class or interface, we can add new types without modifying existing code. For example, the Shape interface could allow the addition of new shapes (like Rectangle or Triangle) without altering the code for existing shapes.

Finally, Reusability directly connects with Inheritance, as inheritance allows one class to inherit properties and behaviors from another, making it easier to reuse and extend code. For instance, a Bird class could serve as a parent for Parrot and Eagle classes, where the child classes inherit common attributes like wingspan and methods like fly().
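The Bird example can also be sketched briefly, showing inheritance and polymorphism together. The class names come from the post; the attribute values and method bodies are invented for illustration.

```python
class Bird:
    def __init__(self, wingspan_cm: float):
        self.wingspan_cm = wingspan_cm  # shared attribute, inherited by all birds

    def fly(self) -> str:
        return "flying"                 # shared behavior, inherited by all birds

class Parrot(Bird):
    def speak(self) -> str:
        return "hello!"                 # behavior unique to Parrot

class Eagle(Bird):
    pass                                # reuses everything from Bird unchanged

# Polymorphism: code written against Bird works for any subclass.
flock = [Parrot(25.0), Eagle(210.0)]
print([bird.fly() for bird in flock])
```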

This blog post helped me understand how these design principles connect to each other. The assignment’s focus on defining the OOD principles and understanding their importance has deepened my appreciation for Abstraction and Encapsulation as tools for simplifying and protecting code. I now feel more confident in applying these principles to make my software designs both flexible and maintainable.

I now have a clearer understanding of how to approach system design in a more structured and efficient way. By focusing on Reusability and Simplicity, I’ll be able to write code that is easier to maintain and scale over time.

Link to the Resource:
Four Design Principles by Ivan Brygar

From the blog SoftwareDiary by Oanh Nguyen and used with permission of the author. All other rights reserved by the author.

Clean Code

Hello everyone

For this week’s blog topic, I will talk about clean code. This was one of my favorite topics that we have discussed in class so far. Even before, I knew that writing good code is important, but I was unsure of how clean code was written. I had some idea from doing exercises, solving problems, and writing my own projects, either for school or for fun. I thought my code was good enough that it would pass Uncle Bob’s test of clean code, but after reviewing it again through his perspective, it definitely does not pass, as it needs a lot of improvement.

Clean code is extremely significant: as the author of the blog states, writing code may take up 10% of a programmer’s time, while reading and understanding code occupies the remaining 90%. The author mentions that this is even more true in an open-source project like his, where external contributions are welcome. The code needs to be clean enough that contributors can easily understand it and spend most of their time writing code to add to the project. If the code were written poorly, it would make the contributor’s experience a nightmare, as they would spend double their time just reading and understanding it, leaving barely any time to add new features. And if the project comes to be used by many users, bugs will eventually pop up, and fixing them in a timely manner when the code is a mess is very frustrating. Even adding new features becomes extremely time-consuming, as you have to do twice the work you would if the code were written well. This is one of the reasons why I chose this blog over the rest: the author emphasizes the real problems that come with writing bad code and how much time it will cost you to keep your project up and running.

Later in the blog, the author points out the importance of functions and formatting. These are two simple things that anyone can learn and apply to any of their work.

Some key points that I wrote down for myself were that functions should be kept small and compact. The principle states that small functions increase clarity and reduce clutter. They are easy on the eye as you scroll through the code, making it clear at a glance what each function does and where it is used. The second point he mentioned was formatting, both vertical and horizontal. He compared vertical formatting to the Newspaper Metaphor, which suggests that code should present key information at the top. It is also important to create vertical space between different functions, which separates them and establishes a format that is followed throughout the rest of the project.

Horizontal formatting focuses more on maintaining manageable line lengths. We don’t want lines of code to be too long, and they should be viewable on every screen size. Scrolling horizontally through code is not recommended and should always be avoided, as it becomes annoying and time-consuming. This blog, in a way, was a reflection of my own journey toward clean code. I made similar mistakes in the past, not knowing how to write clean code, but after reading more about it, I am now much better equipped to write it. This is a valuable skill that my future self will thank me for mastering!
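To illustrate the "small functions" and Newspaper Metaphor ideas, here is an invented before/after sketch; the example and all names are my own, not from the blog.

```python
# Before: one function that loops, filters, sums, and formats all at once.
def report_dirty(orders):
    total = 0
    for o in orders:
        if o["shipped"]:
            total += o["price"] * o["qty"]
    return "Total shipped revenue: $" + str(total)

# After: the "headline" function reads at the top (Newspaper Metaphor),
# and each small function does exactly one thing.
def report(orders):
    return f"Total shipped revenue: ${shipped_revenue(orders)}"

def shipped_revenue(orders):
    return sum(o["price"] * o["qty"] for o in orders if o["shipped"])

orders = [{"price": 10, "qty": 2, "shipped": True},
          {"price": 5, "qty": 1, "shipped": False}]
assert report(orders) == report_dirty(orders) == "Total shipped revenue: $20"
```

Both versions compute the same result, but in the second one a reader can stop at `report` and already know the story, descending into `shipped_revenue` only if they need the details.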

https://medium.com/codex/reading-clean-code-week-2-643641e4dc28

From the blog CS@Worcester – Site Title by Elio Ngjelo and used with permission of the author. All other rights reserved by the author.