Category Archives: Technology

A License to Develop Software

I read a blog titled “Software License Management” by Samantha Rohn of Whatfix. It dives into the complexities of software licensing, explaining the different types of licenses and their implications. Since I’ve been learning about open-source projects and legal considerations in software development, this blog felt like an essential read. I picked this blog because software licensing is a topic that many developers, including myself, often overlook or misunderstand. In my coursework, we’ve briefly touched on the importance of licenses, but I never fully grasped the differences between them or their real-world applications. As I start working on team projects and open-source contributions, understanding how to navigate licensing is crucial to avoiding legal issues and contributing responsibly to the developer community.

The blog provides an overview of software licensing, emphasizing why it’s critical for both developers and organizations. It categorizes licenses into two main types:

  • Permissive Licenses: These allow more flexibility. Developers can modify, distribute, and use the software with minimal restrictions, often without the need to release their modifications.
  • Copyleft Licenses: These require derivative works to retain the original license terms. For example, modifications to a product under a copyleft license must also be distributed with the same license attached.

The post also introduces the concept of software license management, highlighting the need for organizations to track, organize, and comply with licenses to avoid legal and financial risks. It concludes with best practices for effective license management, such as inventorying all software assets and ensuring compliance with usage terms.
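
One lightweight practice that supports this kind of tracking, assuming a Java codebase, is putting a machine-readable SPDX license identifier at the top of each source file so that license-management tools can inventory which terms apply. The header below is a generic illustration of that convention, not something taken from the Whatfix post:

    // SPDX-License-Identifier: MIT
    // Copyright (c) 2024 Example Author
    //
    // A standard, machine-readable license comment makes it easy for tooling
    // (and teammates) to see which terms govern this file.
    public final class LicenseHeaderExample {
        private LicenseHeaderExample() {
            // utility class; nothing to instantiate
        }
    }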

This blog was an eye-opener for me. One thing that stood out was the explanation of copyleft licensing. Before reading this, I didn’t realize how restrictive some licenses could be in terms of sharing modifications. For instance, if I modify software with a copyleft license, I’d have to release my work under the same license, which might limit its use in proprietary projects. This insight made me rethink how I approach licensing for my own projects.

I also found the section on license management practices especially relevant. As developers, we tend to focus solely on the technical aspects of coding and ignore legal considerations. However, knowing how to choose and manage licenses is equally important, especially as I start collaborating on larger projects.

This blog gave me a clearer understanding of how to responsibly use and share code. Moving forward, I’ll make sure to read and understand the terms of any license attached to the libraries and frameworks I use. Additionally, when I create software, I’ll carefully select a license that aligns with my goals, whether for open-source contribution or proprietary use. If you’re new to software licensing or want to understand how to manage licenses effectively, I recommend reading this blog. It’s a straightforward guide to a topic every developer should know.

Resource:

https://whatfix.com/blog/software-license-management/

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Semantics Antics

Recently, I came across an interesting blog post titled “A Beginner’s Guide to Semantic Versioning” by Victor Pierre. It caught my attention because I’ve been learning about software development best practices, and versioning is a fundamental yet often overlooked topic. The blog simplifies a concept that is vital for managing software releases and ensuring compatibility across systems. I selected this post because, in my current coursework, semantic versioning keeps appearing in discussions about software maintenance and deployment. I’ve encountered terms like “major,” “minor,” and “patch” versions while working on team projects, but I didn’t fully understand their significance or how to apply them effectively. This guide promised to break down the topic in a beginner-friendly way, and it delivered.

The blog explains semantic versioning as a standardized system for labeling software updates. Versions follow a MAJOR.MINOR.PATCH format, where:

  • MAJOR: Introduces changes that break backward compatibility.
  • MINOR: Adds new features in a backward-compatible way.
  • PATCH: Fixes bugs without changing existing functionality.

The post emphasizes how semantic versioning helps both developers and users by setting clear expectations. For example, a “2.1.0” update means the software gained new features while remaining compatible with “2.0.0,” whereas “3.0.0” signals significant changes requiring adjustments. The author also highlights best practices, such as adhering to this structure for open-source projects and communicating changes through release notes.
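
To make the format concrete, here is a small Java sketch of the rule of thumb described above; it is my own illustration rather than code from the blog, and the class and method names are invented for the example:

    // Minimal semantic-versioning helper: parses MAJOR.MINOR.PATCH strings
    // and checks whether an upgrade keeps backward compatibility.
    public record SemVer(int major, int minor, int patch) {

        public static SemVer parse(String version) {
            String[] parts = version.split("\\.");
            return new SemVer(Integer.parseInt(parts[0]),
                              Integer.parseInt(parts[1]),
                              Integer.parseInt(parts[2]));
        }

        // Only a change in MAJOR signals a break in backward compatibility.
        public boolean isBackwardCompatibleUpgradeTo(SemVer next) {
            return next.major() == this.major;
        }

        public static void main(String[] args) {
            SemVer current = parse("2.0.0");
            System.out.println(current.isBackwardCompatibleUpgradeTo(parse("2.1.0"))); // true
            System.out.println(current.isBackwardCompatibleUpgradeTo(parse("3.0.0"))); // false
        }
    }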

Reading this blog clarified a lot for me. One key takeaway is how semantic versioning minimizes confusion during development. I realized that in my past group projects, we sometimes struggled to track changes because we didn’t use a structured versioning approach. If a teammate updated a module, we often didn’t know if it introduced breaking changes or just fixed minor issues. Incorporating semantic versioning could have streamlined our collaboration.

I also appreciated the blog’s simplicity. By breaking down each component of a version number and providing examples, the post made a somewhat abstract topic relatable. It reminded me that software development isn’t just about writing code but also about maintaining and communicating it effectively.

Moving forward, I plan to adopt semantic versioning in my personal projects and advocate for it in team settings. Using clear version numbers will make my code more maintainable and professional, especially as I contribute to open-source projects. If you’re looking to deepen your understanding of software versioning or improve your development workflow, I highly recommend checking out Victor Pierre’s blog. It’s a quick, insightful read that makes a technical topic approachable.

Resource:

https://victorpierre.dev/blog/beginners-guide-semantic-versioning/

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Enhancing Development with Software Design Patterns

“Design patterns represent common software design problems and well-tested solutions to those problems.” This is a line from my class’s first exercise introducing us to design patterns. In it we learned that writing scalable code relies on certain well-established solutions, design patterns, which are the culmination of previous developers’ struggles to add functionality to already existing code.

When we learned about design patterns in class and in the homework, we worked with the Singleton, Strategy, and Simple Factory design patterns. This GeeksforGeeks article builds on the classwork by first separating its list into Creational, Structural, and Behavioral types. Creational patterns address how objects are created, separating how an object is constructed from how it is implemented. Included in this type are the Factory and Singleton patterns we had already seen, as well as new-to-me patterns called the Prototype, Builder, and Abstract Factory patterns. Under the Structural category are patterns that handle class and object composition; they make use of inheritance and help structure efficient interfaces and implementations. Here they included the Adapter, Bridge, Composite, Decorator, Facade, Proxy, and Flyweight patterns, all brand new to me. Finally came the Behavioral patterns, which at first brush sounded like they focused solely on the responsibilities of objects and classes but actually cover how those objects and classes communicate with each other. In this section the Strategy pattern returned, along with the Observer, State, Command, Chain of Responsibility, Template, Interpreter, Visitor, Mediator, and Memento patterns. At the end of the article is an FAQ section where they explain things such as how algorithmic solutions compare to design patterns: the former are computational solutions, while the latter are structural ones.
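
Since the Singleton pattern is one we had already seen in class, here is a minimal Java sketch of it as a refresher; this is my own generic version, not code from the article:

    // A lazily initialized, thread-safe Singleton: only one instance ever exists.
    public final class ConfigurationManager {

        private static ConfigurationManager instance;

        private ConfigurationManager() {
            // private constructor prevents outside instantiation
        }

        public static synchronized ConfigurationManager getInstance() {
            if (instance == null) {
                instance = new ConfigurationManager();
            }
            return instance;
        }
    }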

I chose this article because it showed me an entirely new category of design patterns that tackle interface creation, something I personally find to be a weak point in my understanding of OOP design. I actually clicked into the Bridge design pattern because it allows abstraction and implementation to be developed separately. When you have subclasses of subclasses (their example used ProduceBus and AssemblyBus under the Bus class, which in turn sits under the Vehicle class), you have a problem any time you wish to modify the middle-level (Bus) class. The Bridge pattern says to separate the produce and assembly implementations into their own implementations of an interface called Workshop that operates on objects of the Vehicle class. This way, changing the Bus class doesn’t directly change how the produce and assembly portions work, which saves time.
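
Here is a rough Java sketch of that Bridge idea, assuming a Workshop interface with produce and assemble implementations; the structure follows the article’s example, but the code itself is my own approximation:

    // Implementation side of the bridge: the kinds of work done on a vehicle.
    interface Workshop {
        void work();
    }

    class Produce implements Workshop {
        public void work() { System.out.println("Producing parts"); }
    }

    class Assemble implements Workshop {
        public void work() { System.out.println("Assembling parts"); }
    }

    // Abstraction side: vehicles hold references to workshops instead of
    // hard-coding produce/assembly behavior in every subclass.
    abstract class Vehicle {
        protected final Workshop workshop1;
        protected final Workshop workshop2;

        Vehicle(Workshop workshop1, Workshop workshop2) {
            this.workshop1 = workshop1;
            this.workshop2 = workshop2;
        }

        abstract void manufacture();
    }

    class Bus extends Vehicle {
        Bus(Workshop workshop1, Workshop workshop2) { super(workshop1, workshop2); }

        void manufacture() {
            System.out.println("Bus:");
            workshop1.work();
            workshop2.work();
        }
    }

Because the Bus class only delegates to whatever Workshop objects it is given, changing Bus never touches how producing or assembling actually works.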

I have bookmarked this page so that, until I can pull these patterns from memory, I can make use of these numerous proven solutions. It is an amazing resource since it links to more in-depth explanations of each design pattern, so readers can truly grasp how these techniques work in practice.

Link:
https://www.geeksforgeeks.org/software-design-patterns/

From the blog CS@Worcester – Coder's First Steps by amoulton2 and used with permission of the author. All other rights reserved by the author.

Connecting “Copyrights in AI” to Copyright and Licensing Homework

In the rapidly advancing world of artificial intelligence (AI), the intersection of technology and law has become increasingly complex. One of the most pressing legal issues is how copyright laws apply to AI-generated content. This is exactly what the article “Copyrights in AI: Legal Overview” from HackerNoon addresses: the author discusses the implications of copyright law in the context of AI, focusing on whether AI can be considered an author of creative works and how this impacts the rights of those who use AI to create content.

The article provides a clear overview of the current state of copyright law as it pertains to AI. Traditionally, copyright laws have protected works created by human authors, but the rise of AI-generated content led me to ask: “can an AI be considered an author in its own right, or does the copyright belong to the human who programmed the AI, or to the user who directed its output?” I learned that, under current law, AI cannot be considered an author in its own right, and the copyright typically belongs to the human creator or the user of the AI. This reflects a fundamental principle that we explore in our class, especially when considering software licenses. For example, when choosing a license for a software project, it is essential to understand the ownership of contributions and the rights of the contributors.

I selected this resource because the legal implications of AI are an area of particular interest to me, especially as AI continues to grow in influence and application across various industries. In one of my other classes, Computing Ethics, we talked about the ethical responsibilities and legal dilemmas surrounding the use of AI, for example in medical or business settings, and how its use would affect the people relying on it. This article connects those themes by highlighting the legal aspects of AI usage and authorship, which I had not fully considered before. It helped me understand that as AI technology becomes more sophisticated, the law may need to adapt to address new challenges.

By exploring “Copyrights in AI: Legal Overview” and reflecting on the licensing aspects discussed in my homework, I have gained a deeper understanding of how AI-related legal issues intersect with software licensing. In our Copyright and Licensing Homework, we focus on understanding different licensing models and the implications they have for the use and distribution of software, so understanding who owns the rights to AI-generated works is critical to deciding how those works can be shared, modified, or distributed.

I expect to apply this knowledge when working with software projects, ensuring that the terms and conditions of any AI tools or systems used are clearly defined. As AI continues to grow in capabilities and its integration into software development increases, I believe this knowledge will be essential to navigating the complex legal landscape.

Link to the resource: HackerNoon article: Copyrights in AI: Legal Overview

From the blog SoftwareDiary by Oanh Nguyen and used with permission of the author. All other rights reserved by the author.

Discovering Design patterns

Hello, Debug Ducker here again. Have you ever thought about how your code is structured? You have probably been practicing basic code etiquette, but have you ever thought about how you could make your code more manageable and neater, to save yourself trouble later?

Here is an example of a code design in UML from a programming assignment:

The basic gist is that we are making ducks and applying qualities to them. As you can see, there are different types of duck, including my favorite, the rubber duck. But I am sure you can also see a problem: despite all of them being ducks, not every attribute of a duck applies to each one, as shown with the decoy duck and the rubber duck. Their quack and fly methods would be different, so we have to override them to do something else. This can get tedious, especially if we were to add more ducks, and it makes the abstract class feel almost pointless. This is where design patterns come in.

Instead of overriding the fly and quack methods in the different types of ducks, we add separate behavior implementations that supply those behaviors, without needing to modify methods within the ducks themselves. The relevant design pattern here is known as the Strategy pattern, and that’s where we get into the real meat of things.

Design patterns, as the name suggests, are designs that programmers can use to fix certain problems in their code, whether the issue is readability, managing the code, or streamlining a process. The Strategy pattern splits the specifics of a class out into separate behavior classes, as in the example of the fly and quack behaviors that were originally part of several different ducks with different qualities. This helps whenever we want to add a duck with a different behavior: one of the existing behavior implementations can simply be applied. There are several other design patterns out there, such as the Factory pattern, which creates objects through what is called a factory method; for example, if the rubber duck factory method is called, then an object with rubber duck qualities is made.
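
To make that concrete, here is a minimal sketch of the Strategy pattern using the duck example; the class and method names are my own illustrative choices rather than the exact ones from class:

    // The fly behavior is pulled out of the Duck hierarchy into its own interface.
    interface FlyBehavior {
        void fly();
    }

    class FlyWithWings implements FlyBehavior {
        public void fly() { System.out.println("Flying with wings"); }
    }

    class CannotFly implements FlyBehavior {
        public void fly() { System.out.println("I can't fly"); }
    }

    // A duck is composed with a behavior instead of overriding fly() in every subclass.
    class Duck {
        private final FlyBehavior flyBehavior;

        Duck(FlyBehavior flyBehavior) {
            this.flyBehavior = flyBehavior;
        }

        void performFly() {
            flyBehavior.fly();
        }
    }

    class DuckDemo {
        public static void main(String[] args) {
            Duck mallard = new Duck(new FlyWithWings());
            Duck rubberDuck = new Duck(new CannotFly());
            mallard.performFly();    // Flying with wings
            rubberDuck.performFly(); // I can't fly
        }
    }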

Here is an example of what a factory method might look like in code.
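
The sketch below is my own minimal illustration, reusing the Duck and behavior classes from the Strategy example above; the factory name and duck types are invented for the example:

    // A simple factory: object-creation logic lives in one place instead of in every caller.
    class DuckFactory {
        static Duck createDuck(String type) {
            switch (type) {
                case "rubber":  return new Duck(new CannotFly());    // rubber ducks can't fly
                case "mallard": return new Duck(new FlyWithWings());
                default: throw new IllegalArgumentException("Unknown duck type: " + type);
            }
        }
    }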

There are a lot more patterns to choose from that can help you with all sorts of coding problems. GeeksforGeeks has a great article explaining them, along with even more patterns.

https://www.geeksforgeeks.org/software-design-patterns/

Design patterns can be useful for many coding problems, whether it’s restructuring your code to make it easier to work on or refactoring it to improve its functionality. I can see myself using these whenever I encounter a problem.

“Software Design Patterns Tutorial.” GeeksforGeeks, GeeksforGeeks, 15 Oct. 2024, http://www.geeksforgeeks.org/software-design-patterns/.

Guru, Refactoring. “Strategy.” Refactoring.Guru, refactoring.guru/design-patterns/strategy.

From the blog CS@Worcester – Debug Duck by debugducker and used with permission of the author. All other rights reserved by the author.

Is Open Source Best?

https://medium.com/chiselstrike/why-open-source-is-a-terrible-way-to-build-products-yet-most-great-products-use-open-source-c3bf9e201648

This post by Glauber Costa, an entrepreneur and former open source developer, discusses why open source is a horrible way to develop products but the best way to develop the technology required for products. Technology and product are differentiated: a product is something that uses technology to achieve some end market goal. I’ve always thought that open source was the best option for the vast majority of developers, with little exception outside of larger businesses where proprietary licenses would be best. This post challenged that notion.

When I look around the modern tech industry, I constantly see open source. I’ve known more people than I can remember who’ve used Godot, Shotcut, Krita, and Audacity; I’m writing this post on WordPress. However, the blog makes a great point: none of these products are better than their proprietary counterparts for general users. I always saw the community of open source developers and the possible self-customization of the works as the greatest strength of the licensing; I never for a moment considered how this lack of centralization in development is also Open Source’s biggest weakness. 

In light of this post, when I look around the open source world, I see very few products that my non-developer friends would ever use. For example, while I think Linux is great, I think all of them would prefer to stick with their Windows machines even if they had experience with Linux. I never thought to disconnect the impressive feats of community development, and the amazing innovations they have brought to technology, from the products that are released with said innovations, but on reflection it makes perfect sense.

Open source development allows people to work on what they want when they want, and it gives individuals the chance to use their passion as an opportunity to solve problems they deeply care about. Proprietary development, on the other hand, gives people explicit goals aimed at creating the best end-user experience for everyone, with typically a strong centralization that ensures the product is polished, easy to use, and accessible to a wider audience. When you put it like this, it’s only logical that open source would result in the most innovative pieces of technology created by people with the drive to do so, while proprietary would result in the best products as they have an explicit focus to create something that would be best for the general user even if it comes at the loss of niche features for small groups of users.

From the blog CS@Worcester – CS ZStomski by Zachary Stomski and used with permission of the author. All other rights reserved by the author.

How Much Formatting is Too Much?

https://www.steveonstuff.com/2022/02/09/nitpicky-code-reviews-are-are-drag

This post from Steve Barnegren discusses his issues with the current state of team code review. Specifically, the blog takes time to point out the problems with being overly obsessive about nice formatting. It is one thing to point out flaws in logic and potential failures the system may have, giving feedback on how it might be improved or fixed, and another to, for example, say a ternary operator should be used instead of an if-else. The argument is made that the majority of ‘formatting issues’ of the variety I’ve described do not provide enough value for the time they take from maintainers and developers.
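
For a sense of how small these nitpicks can be, here are the two equivalent forms side by side; this is a generic Java illustration, not an example taken from Steve’s post:

    class LabelFormatter {
        // The long form a reviewer might flag...
        static String pluralize(int count) {
            String label;
            if (count == 1) {
                label = "item";
            } else {
                label = "items";
            }
            return label;
        }

        // ...and the ternary they might ask for instead; both behave identically.
        static String pluralizeTernary(int count) {
            return (count == 1) ? "item" : "items";
        }
    }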

Personally speaking, I like to have my code in a very consistent format. If just one thing is in a different format, it seriously bothers me. It wasn’t until recently, when I started working on a project as a member of WSU’s Computer Science Club, that I had to work in a team larger than two people, and in doing so I found out pretty quickly that many people do not hold their code to the same standards.

One team member very specifically does not care at all about how the code is formatted, focusing solely on efficiency and output. At the beginning of the project, I gave pushback on this, believing that the code he was writing was very poor if we wanted to maintain standards, but I was assured it wasn’t a big deal. All of us were working on different parts of the project and were generally disconnected from one another until it came time to connect things; I specifically was on my own creating the GUI of the project. However, the issue finally came when it was time for me to start working on the backend. What I found was a disaster, not necessarily in terms of functionality; there were definitely errors in output, but that wasn’t my concern. The real disaster was the cleanliness of the code. Trying to figure out what was going on, how things were calculated, and what was stored where took a lot of tracing.

By the time I finally understood what was going on, it took me very little time to do what was asked of me, but the process to get there should not have taken that long. The person who originally wrote the backend is now working to create extensive documentation so that people don’t have to go through that process again. If there had been consistency in the formatting of the code, clear demonstration of how things functioned, and precautions taken to make sure things did not get out of hand, I feel it wouldn’t have been nearly as bad.

Although I hear this blog’s argument, I also hear it echoed in the person who originally said it wasn’t a big deal. In my opinion, the precondition for a team to have code reviews like the ones Steve recommends must be that all team members have already agreed to, and shown they are capable of, writing code that is clean and makes sense; otherwise you get the horror I had to go through. Generally, I think code reviews should be unique to every team, because the same rules don’t work for everybody, and nitpicks do have their place on some teams.

From the blog CS@Worcester – CS ZStomski by Zachary Stomski and used with permission of the author. All other rights reserved by the author.

Code Review

Source: https://about.gitlab.com/topics/version-control/what-is-code-review/

This article is titled “What is a code review?” As clearly stated by the title, the article explains the process of code reviewing. “A code review is a peer review of code that helps developers ensure or improve the code quality before they merge and ship it.” Code reviews help in the identification of bugs, increase the overall quality of code, and enhance understanding of the source code. Code review, as suggested by the name, happens after a software developer has finished coding. Code needs to be checked for bugs or conflicts before it is merged into an upstream branch. A code reviewer “can be from any team or group as long as they’re a domain expert. If the lines of code cover more than one domain, two experts should review the code.” Adhering to a solid code review process allows for continuous improvement of code and aims to ensure that faulty code isn’t implemented for customers/users to see and use. This process isn’t just important for the code itself, but also for all of the team members of a software development project. While reviewing the code, team members share meaningful knowledge of the source code to ensure that it is being implemented properly.

The main benefits of the code review process are: the sharing of knowledge, discovering bugs earlier, maintaining compliance, enhancing security, increasing collaboration, and improving code quality. Code reviews allow for maintaining compliance because different developers have different backgrounds and thus different personal processes when they are developing. Code reviews allow these people to get together and maintain a standard coding style. Security is enhanced because “security team members can review code for vulnerabilities and alert developers to the threat. Code reviews are a great complement to automated scans and tests that detect security vulnerabilities.” There are many benefits to code review, but there are also some disadvantages, including longer time to ship, focus being pulled from other tasks, and longer review times for large reviews. These can be described as necessary evils given the sheer number of positives that code reviews offer in software development.

I chose this article because it was published by GitLab, a platform that we are heavily using in class for version control, and I thought it would be interesting to read about this specific topic from the syllabus. Version control software such as GitLab is what makes code reviews possible, so diving deeper into the topic through an article published by this popular software company was tempting. Before reading this article I understood that code reviews were important to pinpoint any bugs or difficulties before merging code into the upstream, but I never really thought about the implications for security or for different development styles. I’ll definitely keep this information in mind during future code reviews on the job, to remind myself that bugs aren’t the only important thing during a code review.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

YAGNI

Source: https://www.geeksforgeeks.org/what-is-yagni-principle-you-arent-gonna-need-it/

This article is titled “What is YAGNI principle (You Aren’t Gonna Need IT)?” YAGNI is “a principle in software development that suggests developers should only implement features that are necessary for the current requirements and not add any additional functionality that might be needed in the future.” The reasoning for this is that if you add features that might potentially be needed in the future, there will be risk of more bugs, increased complexity, and increased development time, thus leading to increased cost. The YAGNI principle is similar to the KISS principle (Keep It Simple, Stupid), which also advocates for simplicity; it encourages developers to avoid complexity when it isn’t necessary. Developers should follow the YAGNI principle if they wish to avoid four kinds of cost: the cost of building, of delay, of carry, and of repair. The cost of building refers to the total cost of the effort and resources put into the project; building things that aren’t needed increases costs overall. The cost of delay refers to missed opportunities: if you spend time on unnecessary features, the development of more important ones will inevitably be delayed. The cost of carry refers to the difficulties of having unnecessarily complex features. These complexities make it harder to work on other parts of a software project, require more time, lead to increased cost, and overall make it harder to move forward. Lastly, the cost of repair, or technical debt, refers to the costs associated with bugs or mistakes that occur during the development process. YAGNI is important to ensure that the development process is focused, efficient, and cost-effective. YAGNI can be implemented in your code by prioritizing communication between team members. Ensuring that necessary requirements are met, making a simple plan, ignoring ideas that don’t meet goals or deadlines, and keeping good records of project progress will allow your team to follow the YAGNI principle. YAGNI allows for simplicity, faster development, flexibility, reduced risk, and cost savings by complementing other development principles while avoiding unnecessary implementations.
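
As a small illustration of the idea (my own example, not taken from the article), compare building only what today’s requirement needs against adding speculative options:

    import java.util.List;

    // YAGNI-friendly: the current requirement is only "export a report as CSV".
    class ReportExporter {
        String toCsv(List<String[]> rows) {
            StringBuilder out = new StringBuilder();
            for (String[] row : rows) {
                out.append(String.join(",", row)).append("\n");
            }
            return out.toString();
        }
        // A YAGNI violation would be also adding toXml(), toPdf(), and a plugin
        // system "just in case," before anyone has actually asked for them.
    }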

I chose this article because I appreciate how GeeksforGeeks simplifies topics within the software development community. I don’t recall this principle being explicitly mentioned in class, but we have definitely alluded to it, and I thought it’d be beneficial to read about it more, considering that it is in the syllabus. It was interesting to learn that the YAGNI principle complements other software development principles, such as the KISS principle, and distills them into a rule that prioritizes simplicity over complexity and extra features. It embodies the idea of “less is more.” This is a great set of guidelines I’ll be sure to follow in industry, because it recognizes that doing less work isn’t always a bad thing. Instead of creating a multitude of features, ensuring that the ones that are critical, and required sooner, are being developed will still get the job done.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Workflow for a Developer


This week, I came across a post titled “Improving Developer Workflow” on Vercel’s blog, and it caught my attention because I’ve been trying to figure out how developers stay productive while coding. The article dives into different ways to make workflows more efficient, focusing on tools and practices that help developers ship better code faster. Since I’m new to computer science and still figuring out how to work effectively, this post felt super relevant to my learning journey.

The post highlights key aspects of improving developer workflows. It starts by discussing the importance of having fast feedback loops, meaning developers should quickly see the results of their code changes. This post introduces tools like Vercel’s platform, which makes it easy to preview, test, and deploy changes almost instantly. Another focus is on collaboration, emphasizing how tools like GitHub help teams share work and review code seamlessly. It wraps up by stressing the value of automation, like setting up CI/CD pipelines, to reduce repetitive tasks and ensure consistent quality in the codebase.

I chose this post because workflow optimization feels like an essential skill for any developer, even beginners. Sometimes I get stuck on repetitive tasks or wait too long to test my code changes, which can be frustrating. This post seemed like a good way to learn how experienced developers streamline their processes. Also, tools like GitHub and CI/CD were mentioned in class, so I wanted to understand them better.

The main thing I learned is how fast feedback loops can save a lot of time and frustration. For example, using tools like Vercel lets developers instantly preview their changes in a live environment, so they don’t have to guess if their code works. I also learned how CI/CD pipelines automate testing and deployment, which not only saves time but also reduces the risk of errors. I realized that these tools make a developer’s life easier, but they also require some setup and understanding, which I’m excited to learn more about. Another cool takeaway was how much collaboration matters in a developer’s workflow. I’ve used GitHub for simple projects, but the blog post made me realize how powerful it can be when teams use it for pull requests, code reviews, and tracking changes.

This blog post made me want to improve my own workflow by setting up faster feedback systems, even for small projects. I also plan to explore tools like GitHub Actions to try basic automation for testing. In the future, I hope to use these techniques to work more effectively on team projects and avoid common frustrations like repetitive tasks.

Resource:

https://vercel.com/blog/improving-developer-workflow

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.