Category Archives: CS-343

Semantic Versioning

Summary: Semantic Versioning (SemVer) is an essential system for versioning software. This blog post explores its core components: the major, minor, and patch numbers, along with pre-release and build metadata and how version precedence works. Each of these numbers carries a specific meaning, making them the building blocks of compatibility and change management.

Reason for Selection: I chose this resource because semantic versioning is crucial in software development, and I wanted to gain a deeper understanding of how it works. This is also in line with what we are covering in class and in the development of Thea’s Food Pantry.

Comments on Content: The resource does an excellent job of simplifying the concept of Semantic Versioning. It provides a clear explanation of how version numbers are structured and the significance of each part. I now understand that the MAJOR version indicates incompatible, potentially breaking changes, while MINOR versions introduce new features in a backward-compatible way. PATCH versions are for backward-compatible bug fixes.

One aspect of SemVer that I found particularly enlightening was the discussion of pre-release and build parts. Pre-release versions, marked with a hyphen, are often used to share software informally for testing. They may not be stable and might not satisfy the compatibility requirements implied by the normal version. Build metadata, indicated by a plus sign, is used to record information about the build process, such as who made the build and on which machine. This is especially valuable for tracking software development.

Understanding the precedence of version numbers is crucial for managing dependencies effectively. The resource explains that version comparison in SemVer is done from left to right. Versions that differ only in the build part are considered equal. Pre-release versions have lower precedence than their standard counterparts. Numeric identifiers always have lower precedence than alphanumeric identifiers. This knowledge has given me a better grasp of how to choose the right versions when managing project dependencies.
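
To see how these rules play out, here is a minimal Java sketch of how SemVer precedence could be implemented. This is my own illustration, not code from the resource, and it assumes well-formed version strings:

    public class SemVerCompare {
        // returns negative if a < b, 0 if equal precedence, positive if a > b
        public static int compare(String a, String b) {
            a = a.split("\\+", 2)[0]; // versions differing only in build metadata are equal
            b = b.split("\\+", 2)[0];
            String[] pa = a.split("-", 2); // [core, optional pre-release]
            String[] pb = b.split("-", 2);
            String[] ca = pa[0].split("\\."), cb = pb[0].split("\\.");
            for (int i = 0; i < 3; i++) { // major, minor, patch compared left to right
                int d = Integer.compare(Integer.parseInt(ca[i]), Integer.parseInt(cb[i]));
                if (d != 0) return d;
            }
            boolean preA = pa.length > 1, preB = pb.length > 1;
            if (preA != preB) return preA ? -1 : 1; // pre-release ranks below its normal version
            if (!preA) return 0;
            String[] ia = pa[1].split("\\."), ib = pb[1].split("\\.");
            for (int i = 0; i < Math.min(ia.length, ib.length); i++) {
                boolean numA = ia[i].matches("\\d+"), numB = ib[i].matches("\\d+");
                if (numA && numB) { // numeric identifiers compare numerically
                    int d = Integer.compare(Integer.parseInt(ia[i]), Integer.parseInt(ib[i]));
                    if (d != 0) return d;
                } else if (numA != numB) {
                    return numA ? -1 : 1; // numeric identifiers rank below alphanumeric ones
                } else { // alphanumeric identifiers compare lexically
                    int d = ia[i].compareTo(ib[i]);
                    if (d != 0) return d;
                }
            }
            return Integer.compare(ia.length, ib.length); // more identifiers wins a tie
        }

        public static void main(String[] args) {
            System.out.println(compare("1.0.0-alpha", "1.0.0") < 0);              // true
            System.out.println(compare("1.0.0+build.7", "1.0.0") == 0);           // true
            System.out.println(compare("1.0.0-alpha.1", "1.0.0-alpha.beta") < 0); // true
        }
    }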

Personal Reflection: This resource has clarified many of my questions about Semantic Versioning. It has demystified what can be a complex topic, making it accessible for developers at a beginner or intermediate level of experience like my own. Now, when I encounter version numbers, I can make informed decisions about their compatibility and the potential impact on my projects. As a software developer, I anticipate that this knowledge will be invaluable in ensuring the smooth integration of various software components and libraries. It will help me avoid compatibility issues and streamline the process of managing dependencies, ultimately saving time and effort in the development process.

Link: Semantic Versioning Resource

From the blog CS@Worcester – Abe's Programming Blog by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

The StackOverflow Problem

The world of computer programming is composed of people from every corner of the globe and from countless different social and economic backgrounds, which has made it one of the most diverse fields of study in the world. Some come from well-established foundations, utilizing the resources at their disposal to create new and innovative products. Many more come from more humble beginnings, forging their own path and transforming the world around them. Yet, across this wide spectrum of developers, researchers, and teachers, they all share one thing in common:

At some point in time, at the beginning of their studies, they had relatively no idea what they were doing.

Sure, this sounds harsh. But it’s true for quite literally every field of study imaginable. No one is born with the ability to write machine learning algorithms, develop web applications, or any other kind of work in the field. At one point or another, this knowledge was learned, practiced, and perfected.

In the world of computer programming, the number of seemingly trivial yet incredibly difficult problems one may encounter can be quite high. They can often be hyper-specific issues where directly posing the question at hand to your peers may be the most effective way of reaching a solution. And in the modern age of the internet, an incredible medium for getting answers proves time and time again to be a rock-solid support for developers: StackOverflow.

Munroe, R. (2023) XKCD. http://www.xkcd.com/979

StackOverflow is a forum made for programmers to post issues and questions related to their work, and to receive answers from other programmers on how to resolve them. It can be astonishing: no matter how niche or specific you think your issue may be, there’s an extremely good chance that someone else has had the same problem, posted it, and received multiple different solutions. StackOverflow has the benefit of being powered by the vast workforce of the internet, where millions of people can chime in and offer their assistance. However, therein lies the problem:

The internet can be full of jerks.

Of course, I don’t mean to discredit the vast majority of users who are often incredibly helpful and welcoming to newcomers. The usefulness of StackOverflow and similar sites cannot be overstated. The core issue, in my opinion, is the vocal minority who either through arrogance or simply a lack of tact, can scare away those new to the programming field with a sarcastic attitude and an aura of superiority.

This isn’t just anecdotal either. Nate Swanner from Dice.com writes about how in 2019, after StackOverflow made efforts to convince users to “be nice” to each other, the annual developer survey reported that 73 percent of users saw no change in how welcoming the site was to new users. He acknowledges that while the vast majority of users are able to find answers to their questions, there exists a problem in how the forum treats its fellow developers.

“When you’ve got to wade through a river of ego and spite before being told to ‘Google it,’ we start to wonder how long people will tolerate a Stack Overflow where a ‘cultural shift’ hasn’t yet taken hold.”

While one of my least-favorite things to do is to harp on a problem without offering any kind of solution, I really don’t believe there is one singular fix to this kind of issue. After all, this isn’t a phenomenon exclusive to StackOverflow, or even the Computer Science field for that matter. Sometimes, when we hear a question from someone that to us sounds trivial, our knee-jerk reaction is to think “Dude, really? THAT’S your issue?”

This is the mentality that I believe should be addressed and challenged more often in order to create a more welcoming community. The field of computer programming is growing fast. And there are definitely indications that we’re moving in the right direction. That same developer survey found that users on StackOverflow who were people of color felt more welcome than in previous years, which is a great step forward. However, fields of study are driven by the people who study them. And the more welcoming and supportive these groups are, the more a field will grow.

A quick side note: I hope this post didn’t end up being too philosophical; I had originally planned to write about a completely different topic, however after hearing about some experiences from a friend of mine who has just started his freshman year in a CS program, I felt that this was a topic worth discussing.

Citations:

Nate Swanner. “It’s Not Just You: Stack Overflow Is Still Full of Jerks.” Dice Insights, Dice, 18 Apr. 2019, http://www.dice.com/career-advice/stack-overflow-many-jerks.

From the blog Butler Software Construction, Design, and Architecture by Griffin Butler and used with permission of the author. All other rights reserved by the author.

Embracing Simplicity: YAGNI – You Ain’t Gonna Need It

In the ever-evolving world of software development, there’s a principle that often serves as a guiding light, a beacon of practicality amidst the allure of endless possibilities. It’s called YAGNI, which stands for “You Ain’t Gonna Need It.” This principle challenges developers to adopt a minimalist approach, focusing on what’s essential and avoiding the temptation to add functionality that may never be used.

At its core, YAGNI encourages developers to resist the urge to build features, components, or solutions that are not immediately necessary. It advocates for a “just-in-time” mindset, where development efforts are directed only towards addressing the existing requirements. The rationale is straightforward: speculative or preemptive additions can lead to unnecessary complexity, increased development time, and even bugs. By embracing YAGNI, developers maintain a clear focus on the immediate goals, prevent “over-engineering,” and ensure that the software remains lean and efficient. The principle not only enhances software development but also simplifies the debugging process and ensures that the codebase remains clean and manageable.
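
As a small, invented illustration (the exporter example and its names are hypothetical, not from any particular codebase), compare a speculative design with one that does only what the current requirement asks:

    // Hypothetical scenario: today's only requirement is exporting a report as CSV.
    class Report { /* fields omitted for brevity */ }

    // Speculative design: parameters for formats, compression, and encryption
    // that no current requirement asks for; every caller pays for the complexity.
    interface ReportExporter {
        void export(Report report, String format, boolean compress, String encryptionKey);
    }

    // YAGNI design: exactly what is needed now, and easy to extend later if a
    // real requirement for other formats actually appears.
    interface CsvReportExporter {
        void exportCsv(Report report);
    }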

While YAGNI is a powerful concept, it’s essential to strike a balance. The key is to avoid unnecessary features but still be open to evolving requirements. Flexibility is vital, and YAGNI shouldn’t be used as an excuse to resist necessary changes. In the dynamic world of software development, simplicity, adaptability, and efficiency remain the true hallmarks of excellence.

From the blog CS-343 – Hieu Tran Blog by Trung Hiếu and used with permission of the author. All other rights reserved by the author.

Unveiling the Blueprint of Software Architectures: The Foundation of Digital Development

In the intricate world of software development, one essential factor underpins the creation of every digital marvel – software architectures. These structural frameworks are the unsung heroes, the master plans guiding the intricate construction of software applications. They serve as the invisible hand that shapes the organization of an application, defining its key components, the relationships between them, and the fundamental principles that govern their interactions.

Software architectures, though often behind the scenes, are pivotal in crafting software that’s not just functional but also efficient and tailored to meet specific requirements. They’re akin to the architects of a grand skyscraper, ensuring that each piece falls into place seamlessly, resulting in a robust and scalable digital structure.

Understanding the diverse architectural styles empowers developers to choose the right path for their projects. It’s akin to a skilled craftsman selecting the finest tools and materials for a unique creation. The choice of architecture significantly influences various aspects of a software system. It impacts the system’s performance, scalability, maintainability, security, and adaptability to change.

Embracing the versatility of architectural styles is akin to choosing different brushes for a painting. The software architects are the artists, and the blueprint they select is their canvas. As software development progresses, these architectures are not just abstract concepts; they become the very foundation upon which the digital world evolves.

From the blog CS-343 – Hieu Tran Blog by Trung Hiếu and used with permission of the author. All other rights reserved by the author.

Encapsulation | One of the Four Pillars of Object-Oriented Programming

There are four pillars in Object-Oriented Programming (OOP), one of which is ‘encapsulation’: the good practice of hiding the inner workings of an object and its implementation details, so that access is only possible through public methods. As one of the pillars of OOP, it is beneficial to me as a learning programmer to understand the purpose of encapsulation and how it can be effectively applied.

I realized during one of my classes — when we were reviewing these four pillars — that while I knew the principle by name, I had never learned encapsulation in detail, nor how best to implement it in my code. Looking back at code I had written in the past, it was clear that I did not always follow this practice (e.g. using dangerous global variables as attributes in my objects).

The blog “The four pillars of object-oriented programming — part 1 — encapsulation”, written by Bas Dijkstra, helped better illustrate for me what encapsulation is supposed to do.

Firstly, encapsulation is all about making attributes private. By doing so, an object’s attribute is protected from being accidentally — or maliciously — altered. Instead of accessing the attribute directly, the value can and should be returned through a public method within the object, essentially making the attribute ‘read-only’.

By doing this, a programmer gains more security and control over the code, reducing the amount of unwanted behavior in the system as a whole.

Secondly, good encapsulation is the product of good design. Another blog, “Encapsulation in Functional Programming” by Mark Seemann, which I found while looking deeper into strategies for implementing encapsulation in code, talks about creating a ‘contract’ for each class. The contract outlines three properties of a class:

  1. Preconditions — what are the minimal requirements for the object to function?
  2. Invariants — which attributes of the object do not/cannot change?
  3. Post-conditions — what are the boundaries/rules of the object after it is created?

Though the article focuses more heavily on encapsulation for functional programming, the design strategy also benefits OOP. Having these properties outlined in the contract of a class helps the programmer understand not only what needs to be encapsulated, but also how it can be encapsulated.
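
To tie both blogs together, here is a minimal Java sketch of my own (the class and its rules are invented for illustration): the attribute is private and ‘read-only’ from the outside, while the constructor and methods enforce a simple contract:

    public class BankAccount {
        private double balance; // invariant: balance is never negative

        public BankAccount(double openingBalance) {
            // precondition: the object cannot function with a negative opening balance
            if (openingBalance < 0) {
                throw new IllegalArgumentException("opening balance must be non-negative");
            }
            this.balance = openingBalance;
        }

        // 'read-only' access: callers can observe the value but cannot assign to it
        public double getBalance() {
            return balance;
        }

        public void deposit(double amount) {
            if (amount <= 0) throw new IllegalArgumentException("deposit must be positive");
            balance += amount;
            // postcondition: balance has increased and the invariant still holds
        }
    }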

In addition, because of the nature of encapsulation and the amount of control it provides a programmer over the code, testing classes that follow the principle becomes much simpler and easier.

While a simple principle, encapsulation is a powerful and practical strategy that makes code secure and easy to manage by hiding the inner workings of a class behind public methods.

Sources:

  1. Bas Dijkstra, “The four pillars of object-oriented programming — part 1 — encapsulation”
  2. Mark Seemann, “Encapsulation in Functional Programming”

From the blog Stories by Namson Nguyen on Medium by Namson Nguyen and used with permission of the author. All other rights reserved by the author.

About PlantUML

One of the reasons I like PlantUML is that it forces me to design using a common and standardized framework. It adds a level of formality to the process, which ensures I am conceptualizing my architecture correctly and communicating it clearly with concepts that everyone agrees on. It is a value I learned from the Army: we practice a standard way of making decisions and writing orders. By leveraging a standard modeling language, I am not reinventing the wheel every time.

From the blog CS@Worcester – Andres Ovalles by ergonutt and used with permission of the author. All other rights reserved by the author.

An Introduction to REST APIs: Simplifying Communication in the Digital World (Week-5)

In today’s interconnected digital landscape, communication between various software applications is essential. Whether you’re ordering a pizza online, checking the weather on your smartphone, or browsing your favorite social media platform, chances are you’re interacting with a REST API (Representational State Transfer API) without even realizing it. In this blog post, we’ll delve into the world of REST APIs, exploring what they are, how they work, and why they are so crucial in the modern web ecosystem.

What is a REST API?

At its core, a REST API is a set of rules and conventions for building and interacting with web services. REST, which stands for Representational State Transfer, is an architectural style that was introduced by Roy Fielding in his doctoral dissertation in 2000. RESTful APIs adhere to these principles, making them straightforward and efficient for developers to work with.

A REST API exposes a collection of resources, which can be thought of as objects or data entities, over the internet. Each resource is identified by a unique URL, and clients (e.g., web browsers, mobile apps, or other software systems) can use HTTP requests to perform various operations on these resources. The four primary HTTP methods used in RESTful APIs are:

  1. GET: Retrieve data from the server.
  2. POST: Create a new resource on the server.
  3. PUT: Update an existing resource on the server.
  4. DELETE: Remove a resource from the server.

REST APIs rely on a stateless client-server architecture, meaning that each request from a client to a server must contain all the information required to understand and process the request. This simplicity and separation of concerns are some of the reasons behind the widespread adoption of RESTful APIs.

Key Concepts of REST APIs

To better understand REST APIs, let’s explore some key concepts:

1. Resources

Resources are the fundamental entities that a REST API exposes. These can be objects, data, or services, and they are identified by unique URLs. For example, in a blog application, resources could include articles, authors, and comments.

2. Endpoints

Endpoints are specific URLs that correspond to individual resources or collections of resources. For instance, a blog API might have endpoints like /articles, /articles/{id}, and /authors.

3. HTTP Methods

HTTP methods (GET, POST, PUT, DELETE) define the actions that can be performed on resources. For example, you might use a GET request to retrieve a list of articles (GET /articles), a POST request to create a new article (POST /articles), or a DELETE request to remove an article (DELETE /articles/{id}).

4. Representations

Resources can have multiple representations, such as JSON, XML, or HTML, depending on the client’s needs. Clients specify their desired representation in the HTTP request’s Accept header.

5. Statelessness

REST APIs are stateless, meaning that each request from a client to a server must contain all the information needed to understand and process the request. The server doesn’t store information about the client’s state between requests.
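
To make these concepts concrete, here is a minimal sketch using Java’s built-in HTTP client against a hypothetical blog API; the base URL, endpoints, and JSON body are invented for illustration:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Sketch of a client for a hypothetical blog API; nothing here is a real service.
    public class RestClientSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            String base = "https://api.example.com";
            // note: each request carries everything the server needs; no session state

            // GET /articles: retrieve the collection, asking for a JSON representation
            HttpRequest list = HttpRequest.newBuilder(URI.create(base + "/articles"))
                    .header("Accept", "application/json")
                    .GET()
                    .build();
            HttpResponse<String> articles = client.send(list, HttpResponse.BodyHandlers.ofString());
            System.out.println(articles.statusCode() + " " + articles.body());

            // POST /articles: create a new article resource from a JSON body
            HttpRequest create = HttpRequest.newBuilder(URI.create(base + "/articles"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"title\":\"Hello REST\"}"))
                    .build();
            System.out.println(client.send(create, HttpResponse.BodyHandlers.ofString()).statusCode());

            // DELETE /articles/42: remove one article by its identifier
            HttpRequest remove = HttpRequest.newBuilder(URI.create(base + "/articles/42"))
                    .DELETE()
                    .build();
            System.out.println(client.send(remove, HttpResponse.BodyHandlers.ofString()).statusCode());
        }
    }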

Why Are REST APIs Important?

REST APIs play a crucial role in modern web and application development for several reasons:

  1. Simplicity: RESTful APIs are easy to understand and use due to their simplicity and adherence to HTTP standards.
  2. Scalability: They are highly scalable, making it possible to serve a large number of clients without sacrificing performance.
  3. Interoperability: REST APIs can be consumed by a wide range of clients, including web browsers, mobile apps, and other software systems, making them highly interoperable.
  4. Statelessness: Stateless design simplifies server maintenance and scaling while also improving reliability and fault tolerance.
  5. Flexibility: REST APIs are not tied to a specific programming language or technology, allowing developers to choose the tools and frameworks that best suit their needs.

Conclusion

In today’s digital age, REST APIs have become the backbone of web and application development, enabling seamless communication between various software components. Understanding the fundamentals of REST, such as resources, endpoints, HTTP methods, and statelessness, is essential for any developer looking to build robust and efficient web services. As you continue your journey into the world of software development, REST APIs will undoubtedly play a vital role in your toolkit, facilitating the exchange of data and functionality across the vast landscape of the internet.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

The Art of Code Refactoring

Since we have been discussing refactoring in class recently, it got me interested in finding out more about what makes refactoring… well “refactoring”. I found this interesting article “Refactoring vs. Defactoring” by Nicolas Carlo, a French-Canadian Software Engineer, which describes the difference between refactoring and debugging while also introducing the idea of “defactoring”.

The article starts with the definition of refactoring, which, according to Martin Fowler, is “a change made to the internal structure of software to make it easier to understand and cheaper to modify without changing its observable behavior”. In simpler terms, refactoring is all about tidying up the interior of a program while keeping the exterior the same.

Nicolas is clear, however, that fixing a bug, adding features, or changing features is not refactoring, but he points out the importance of refactoring code before altering the functionality of a program.

Nicolas states that, in his experience, practices that help solidify this distinction include making distinct commits that separate refactorings from changes, committing more frequently, prefixing commit messages with R or C to specify the kind of change made, and learning how to use automated refactoring to improve the health of one’s code.

Nicolas also explains that by following these practices he feels his work has become much safer and simpler than before. With his newfound awareness, he feels he can put the best quality into his work. He also gives a brief rundown of how thinking of Refactoring and Changes as two hats you wear when programming can help increase developer awareness.

While we discussed refactoring, I thought it was interesting how Nicolas framed defactoring as an opposing process to refactoring in the title of the article, but I came to find out it is not that at all. Defactoring is described by Nicolas as “cognitive refactoring”, which is done by making the code less abstract in places where abstraction is no longer required.

He says that when working with legacy code, he notices items such as temporary variables that were necessary in the past but are simply not needed anymore. By altering code to remove such variables, Nicolas labels this process “defactoring”, since it removes old abstractions that no longer serve a purpose.
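
As a tiny, invented Java example of what that cleanup might look like (the types and names are hypothetical, not taken from the article):

    import java.util.List;

    // Before and after "defactoring" a temporary variable that once earned its
    // keep but is now just noise.
    record Item(double price) {}
    record Order(List<Item> items) {}

    class PricingBefore {
        // leftover scaffolding from older, more complex logic
        double subtotal(Order order) {
            double runningTotal = order.items().stream().mapToDouble(Item::price).sum();
            return runningTotal;
        }
    }

    class PricingAfter {
        // same observable behavior, one less abstraction for the reader to track
        double subtotal(Order order) {
            return order.items().stream().mapToDouble(Item::price).sum();
        }
    }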

After reading this article, I feel I have a stronger understanding of the importance of separating refactoring from normal changes, since it can make a dramatic difference in a program’s overall transparency. In my own work, I have realized that taking on one aspect at a time improves the cohesion and efficacy of the final product, but I never really thought about the importance of distinguishing changes from refactoring. Being aware of this in the future will help me create the best version of my work possible, by giving me a more robust knowledge of a program’s behavior and added transparency through my code alone.

Article Link: https://understandlegacycode.com/blog/refactoring-and-defactoring/

From the blog CS@Worcester – Eli's Corner of the Internet by Eli and used with permission of the author. All other rights reserved by the author.

Week 5 – A bit late but we’re getting there…

So it’s been a hot second since I set this blog up, and I apologize for the silence. Been busy focusing on homework and figuring out my work situation.

But with that aside, I just wanna talk about my past with GitHub and repositories before this class. I’ve actually used GitHub many times before, because I collaborate with a modding community. We focus on modding a video game known as Luxor, a classic PC game from the 2000s that I’ll share gameplay of below.

As for what a mod of this game entails, here’s an example of a recent favorite of mine, Hollow, made by my friend Dommo:

A lot of effort has been put into these mods, and I’ve contributed to a lot of them, and even made my own. I have no recordings of it, unfortunately, but I swear it exists, haha.

Recently, though, we’ve been discussing how to properly archive mods. For the longest time, we’ve been using our Discord server for modding to store them, but that poses an issue: many people might not have access to Discord due to their countries, operating systems, or various other reasons.

This led to some people moving over to GitHub, which was one of my first times learning how it actually works. Before this, I had simply downloaded stuff from it, but now I’ve learned the basics of pushing and pulling changes and keeping a local clone to work on while collaborating with multiple people.

Currently, one of the biggest projects being developed using GitHub is OpenSMCE (https://github.com/jakubg1/OpenSMCE), a game engine being built on the Love2D engine to give us an open-source engine to work from for our mods, as opposed to the limited and clunky engine we currently use with the original game.

The reason I discuss this is that the new information I’m learning in these classes is inspiring me to help out with, and learn the process of, team-based software and engine development with Jakub, the developer of OpenSMCE. This is an application I’ve been very excited to see have a full release, and being able to say I contributed to it and helped it reach that state would be amazing.

Hopefully as the semester goes on, with the lessons I’m learning about how to create an application as well as work in a collaborative environment, I’ll end up contributing to this project, and maybe I can even use this blog as a way to discuss the ongoing developments and issues we’ve been facing with the development of OpenSMCE. It would be interesting, and I will probably reach out to Jakub within the next week about it.

Anyways, that’s all I have for this week, until next time!

-Tempura

From the blog CS@Worcester – You're Telling Me A Shrimp Wrote This Code?! by tempurashrimple and used with permission of the author. All other rights reserved by the author.

Data Redundancy – Relevance in Software Systems and Websites

In today’s world, businesses, organizations, and other entities that software and web developers consider “clients” heavily rely on being able to efficiently collect, access, and otherwise manage data for their day-to-day operations. For many, losing access to databases or similar outages hinders their ability to continue operations. In Data Redundancy: Meaning and Importance, author Charlotte White discusses data redundancy and some basic strategies and implementations to address these vulnerabilities.

Data redundancy goes beyond simply having backups of existing data (although those are an important component); it’s a proactive plan to prevent data loss and maintain smooth operations in the case of a server shutdown, hardware malfunction, or other major disruptive issue. It’s crucial for ensuring the continuity of business operations, as website downtime often leads to financial losses, especially for new websites or those with low traffic. Outages can also impact search engine rankings, as uptime is a factor commonly considered by search algorithms. Furthermore, data loss can cause crashes and issues in other systems and the loss of customer information, business details, and other critical and/or confidential information that is essential for an organization’s success and reputation.

How Redundancy Works: Effective redundancy designs reduce dependency on any single copy of data or data center. They commonly implement a 3-2-1 rule of backups, which means having three copies of data in two different locations, one of which is offline storage. Redundancy strategies should also consider factors like hardware redundancy; many servers use hard disk drives (HDDs) to store data which can fail due to simple wear and tear. Some hosting companies use RAID (Redundant Array of Independent Disks) and un-RAID solutions to mirror data from HDDs to other storage devices, minimizing the impact of HDD failures.
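
As a toy illustration of the 3-2-1 idea (the paths and file names are invented, and a real redundancy plan would rely on dedicated backup tooling rather than ad hoc copies):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    // The live file plus two copies gives three copies in total; the second disk
    // is a different device, and the staging folder is assumed to sync to
    // off-site/offline storage.
    public class BackupSketch {
        public static void main(String[] args) throws Exception {
            Path live = Path.of("/data/customers.db");                     // copy 1: live data
            Path secondDisk = Path.of("/mnt/backup-disk/customers.db");    // copy 2: second device
            Path offsiteStage = Path.of("/mnt/offsite-sync/customers.db"); // copy 3: synced away

            Files.copy(live, secondDisk, StandardCopyOption.REPLACE_EXISTING);
            Files.copy(live, offsiteStage, StandardCopyOption.REPLACE_EXISTING);
            // a real setup would schedule this, verify checksums, and rotate old copies
        }
    }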

Recently in CS343, we’ve been looking at software architectures and strategies for organizing systems that could be realistically implemented to address clients’ needs. In particular, we’ve been considering the differences and strengths/weaknesses between a simpler architecture such as the Monolith versus a more complex architecture such as the MicroServices model, with several intercommunicating systems.

Most of the scenarios we discussed involved the ease of pushing out updates, but I was left wondering about the repercussions of a database or system going totally offline, and the ways to manage that possibility. For businesses involved in eCommerce, uptime is money, in terms of sales as well as maintaining search engine optimization. Given how damaging such a disruption could be, data redundancy plans are an important consideration when planning and setting up a website or system. Understanding the value of data redundancy and how it is implemented is an asset in planning and designing software systems and projects, and generally beneficial for computer science students and professionals.

Source:
1. Data Redundancy Meaning and Importance: A Complete Guide | ResellerClub India Blog

From the blog CS@Worcester – Tech. Worth Talking About by jelbirt and used with permission of the author. All other rights reserved by the author.