Category Archives: CS-343

Understanding Technical Debt

In software development, technical debt refers to the extra work required in the future when quick, easy solutions are chosen instead of more thorough, time-consuming approaches. This concept is explored in the article “What is Technical Debt & How Can Companies Manage It?” by Coder Academy.

The article defines technical debt as the accumulation of suboptimal code or design choices made to deliver projects faster. While these shortcuts may lead to quicker releases, they often result in higher maintenance costs and reduced code quality over time.

Key Factors Contributing to Technical Debt

  • Time Constraints: Meeting tight deadlines can lead to hasty decisions that prioritize speed over quality.
  • Evolving Requirements: As project requirements change, older code may no longer align with the current needs.
  • Lack of Documentation: Poor documentation can create misunderstandings and increase errors, contributing to technical debt.

Strategies for Managing Technical Debt

The article suggests several ways to manage technical debt effectively:

  • Regular Code Reviews: Consistently reviewing code helps identify and address suboptimal practices early.
  • Refactoring: Improving existing code without changing its functionality can enhance readability and maintainability (a small sketch follows this list).
  • Comprehensive Documentation: Thorough documentation supports better understanding and future modifications.
  • Prioritization: Address technical debt based on its impact on the overall project’s progress and quality.
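
To make the refactoring bullet concrete, here is a small sketch of my own (not from the Coder Academy article; the function names and the tax-rate constant are invented) showing how code can be cleaned up without changing what it computes:

```typescript
// Hypothetical example: the same total-price logic before and after refactoring.
// Behavior is identical; only readability and maintainability improve.

// Before: a magic number and unclear names make the intent hard to follow.
function calc(items: { price: number; qty: number }[]): number {
  let t = 0;
  for (const i of items) {
    t += i.price * i.qty * 1.0625; // what is 1.0625?
  }
  return t;
}

// After: a named constant and a descriptive function name document the intent.
const SALES_TAX_RATE = 0.0625;

function totalPriceWithTax(items: { price: number; qty: number }[]): number {
  return items.reduce(
    (total, item) => total + item.price * item.qty * (1 + SALES_TAX_RATE),
    0
  );
}
```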

I chose this article because technical debt is a common issue in software development and is closely related to our course material. For Computer Science students, learning how to avoid technical debt is critical. If technical debt becomes a habit, it can lead to poor time management, less active learning, and weak decision-making skills. Understanding its causes and management strategies is essential for maintaining code quality and ensuring project success.

The article provided valuable insights into technical debt and its consequences. I learned that while quick fixes may save time initially, they often lead to higher maintenance efforts and system issues later. The importance of regular code reviews and refactoring stood out, as these practices can help reduce technical debt and improve code quality.

I also appreciated the visual diagrams and tables in the article, which made it easier to understand what technical debt is and how to manage it. I particularly liked the advice on avoiding technical debt and understanding its long-term impacts as a programmer.

By adopting the strategies outlined in the article, I aim to contribute to developing sustainable, high-quality software solutions. This knowledge will help me avoid accumulating technical debt in my future work. I am motivated to build new habits, such as maintaining good documentation, participating in regular code reviews, and prioritizing refactoring. These practices will help me become a more effective and skilled programmer.

Sources:
What is Technical Debt, and how can you manage it?

Citation:
Coder Academy. “What Is Technical Debt, and How Can You Manage It?” Medium, 18 May 2016, medium.com/@coderacademy/what-is-technical-debt-how-can-companies-manage-it-1af08992f6d0.

From the blog CS@Worcester – CodedBear by donna abayon and used with permission of the author. All other rights reserved by the author.

REST API History and Fundamentals

We have discussed REST APIs in class and are learning to use them with a practical, hands-on approach. This approach is faster for the course and is better for my understanding of the material, but I thought it would be a good idea to learn the history of REST APIs and expand my general knowledge of the subject. I found The Postman Team’s blog post, “What Is a REST API? Examples, Uses, and Challenges,” which discusses the fundamentals of REST APIs, including their history, standards, functionality, uses, benefits, and challenges.

The blog starts by explaining how REST APIs evolved as a simpler alternative to SOAP APIs and became the backbone of the modern web. As the internet advanced in the early 2000s, developers and companies needed to be adaptable, and SOAP was not the tool for that. The post expands on the disadvantages of SOAP APIs and explains why they were less valuable at the time: they were not scalable and were too restrictive, making it difficult for developers to innovate. REST APIs, on the other hand, gave companies and developers the flexibility and scalability needed to keep up with the internet as it evolved. Another key feature of REST was independence: REST APIs are not tied to a particular platform or language and enforce client-server separation. That adaptability and simplicity made REST APIs an obvious choice for companies like eBay and Amazon.
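
To make that client-server separation a little more concrete, here is a minimal sketch of my own (the post contains no code; the URL and response shape below are hypothetical): the client only needs a resource URL and an HTTP verb, regardless of which language or platform the server is written in.

```typescript
// Hypothetical REST call: the client depends only on the HTTP interface,
// not on the server's implementation.
interface Product {
  id: string;
  name: string;
}

async function getProduct(id: string): Promise<Product> {
  // Each request is self-contained and addresses a resource by its URL.
  const response = await fetch(`https://api.example.com/products/${id}`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as Product;
}
```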

The post also discusses some of the challenges involved with REST APIs, including maintaining consistent endpoints, managing multiple versions, and dealing with differing authentication methods. All of these issues complicate the development and maintenance of REST APIs, which is why standards are used to mitigate problems down the road.

The blog’s final section showed REST API examples, including Amazon S3, Twitter, Instagram, and Plaid. It explained how each API has its own specialization, like Twitter’s registration process or the way Amazon lets developers incorporate AI.

What I found most interesting about the blog post was the timeline for REST APIs’ evolution, the big names that pushed REST APIs forward, and the reasons behind their actions. Major companies we know today, like eBay and Amazon, got ahead of the curve, leading the way for companies like Twitter and Facebook. Learning the reason behind its success and its practical uses was interesting. I will use what I’ve learned to approach future projects with a stronger understanding of REST APIs, ensuring I design scalable, adaptable, and efficient code.

Blog: https://blog.postman.com/rest-api-examples/

From the blog CS@Worcester – KindlCoding by jkindl and used with permission of the author. All other rights reserved by the author.

Week 14

We have collaborated on the backend for the last few weeks. It has been the central focus of our work, so I wanted to find an article about it. The topic is closely intertwined with what we are doing in class and in the homework outside of class, and it is a great opportunity to see other people’s real-life experiences working in the back end. By doing research and expanding our knowledge, we can understand things we didn’t dive into in class. That is why this week I found an article that goes into detail about backend development.

The article starts by mentioning the importance of the backend and how it’s often overlooked because most of the spotlight is on the front end. The back end is like what is under the hood of a car: you are happy when it works without ever having to open the hood. That being said, the front end and back end work in tandem. The front end is the user-facing side of a website: the text being displayed, the graphics, the buttons, and anything else the user interacts with. The backend, meanwhile, does the behind-the-scenes work that makes the website function; if the visible page is the outside of the car, the engine and other components are the back end. The backend completes user requests, and it has to do so safely and efficiently, since security and efficiency are key to the user experience. This is why backend and frontend developers must work in unison to create successful applications. The article also argues that the main thing backend developers should strive for is innovation: technology is always evolving, people must adapt to it, and becoming stagnant won’t lead to success in this field.

Reading this article made me understand more about backend development. There is so much more to it, especially around data and security. That makes sense, because security is often overlooked: the more information is stored online, the more effort we have to put into securing people’s data, and nobody will want to use your application if there is a security breach. My main takeaway was the article’s statement about innovation. Its final message to the reader was a hopeful one, stating that developers must change with the times because they are at the epicenter of that change. Technology reaches far beyond our field, including healthcare solutions that might not seem important to some but are highly integral to a lot of people.

https://www.ciat.edu/blog/understanding-backend-development/

From the blog CS@Worcester – DCO by dcastillo360 and used with permission of the author. All other rights reserved by the author.

Design Patterns and Code Smells

The first time I was taught about “smells” in code was in connection with Robert C. Martin’s “Agile Software Development, Principles, Patterns, and Practices“. For those who have not read it, the smells are Rigidity, Fragility, Immobility, Viscosity, Needless Complexity, Needless Repetition, and Opacity. All of these are categories of red flags in code; they are problematic because over time they make the code “hard to understand, modify, and maintain”. In an earlier post of mine, I talked about design patterns, which are proven solutions to common coding problems. So when I saw an article talking about smells that can come from these design patterns, I was intrigued. This eventually led me to find out that many consider the Singleton design pattern, specifically, to be one you should never use.

In an article titled “Examining the Pros and Cons of the Singleton Design Pattern“, Alex Mitchell first explains how the goal of the singleton design pattern is to “ensure a class has only one instance, and provide a global point of access to it.” He then goes on to list the pros, which, for those who do not know or remember, are: it ensures only one instance exists throughout the code, it allows that one instance to be accessed globally, and it limits access to that instance. Then he gets to the cons, the first of which is that it violates the single responsibility principle. Next up is the pattern’s tight coupling, followed by how it complicates testing, and he finishes with how it obscures dependencies. He then offers alternatives to singletons in the form of the Monostate pattern or dependency injection, and wraps it all up with a nice conclusion.

Let’s go through the cons in order. Con #1: on a second look this seems obvious; the singleton class is simultaneously controlling its own creation and managing access to itself. Con #2: again, at second blush, it’s because having only one instance of the object means you can’t use polymorphism or alternate implementations. Con #3: you cannot test in isolation, since a singleton persists globally across tests. Con #4: the dependencies are not explicit when the singleton is used. The Monostate pattern allows multiple instances to exist while sharing the same logical state, so while config1 and config2 can both change configValue, getting configValue from either config1 or config2 would return the same value. Dependency injection is, as far as I understand it, passing the dependency into the class that uses it, so rather than the class reaching out to a global singleton, the object is simply handed to the class.
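
To make those last two ideas concrete, here is a small TypeScript sketch of my own (the article contains no code, and the class names are invented): a Monostate Config whose instances share one logical state, and a class that receives its dependency through its constructor instead of grabbing a global singleton.

```typescript
// Monostate sketch: every instance shares the same underlying state, so
// config1 and config2 behave like one logical object.
class Config {
  private static value = "default";

  get configValue(): string {
    return Config.value;
  }

  set configValue(newValue: string) {
    Config.value = newValue;
  }
}

const config1 = new Config();
const config2 = new Config();
config1.configValue = "dark-mode";
console.log(config2.configValue); // "dark-mode" -- same logical state

// Dependency injection sketch: the class receives its dependency through the
// constructor, which makes the dependency explicit and easy to replace in tests.
class ReportService {
  constructor(private readonly config: Config) {}

  describe(): string {
    return `Report generated with setting: ${this.config.configValue}`;
  }
}

const service = new ReportService(config1); // the dependency is passed in
console.log(service.describe());
```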

From this article I have come to a better understanding of dependency injection, and I will probably be using this technique in my future code, since a lot of code apparently still uses the singleton pattern and dependency injection seems to be the best way to handle existing singletons.

Link:
https://expertbeacon.com/examining-the-pros-and-cons-of-the-singleton-design-pattern/

From the blog CS@Worcester – Coder's First Steps by amoulton2 and used with permission of the author. All other rights reserved by the author.

Do You Smell That?

In software development, code smells are subtle yet significant indicators of potential problems within a codebase. Much like how an unpleasant odor hints at deeper issues, code smells signal areas in the code that might lead to bigger challenges if left unaddressed. The article linked below is an exploration of this concept, highlighting the importance of […]

From the blog CS@Worcester – CurrentlyCompiling by currentlycompiling and used with permission of the author. All other rights reserved by the author.

Semantics Antics

Recently, I came across an interesting blog post titled “A Beginner’s Guide to Semantic Versioning” by Victor Pierre. It caught my attention because I’ve been learning about software development best practices, and versioning is a fundamental yet often overlooked topic. The blog simplifies a concept that is vital for managing software releases and ensuring compatibility across systems. I selected this post because, in my current coursework, semantic versioning keeps appearing in discussions about software maintenance and deployment. I’ve encountered terms like “major,” “minor,” and “patch” versions while working on team projects, but I didn’t fully understand their significance or how to apply them effectively. This guide promised to break down the topic in a beginner-friendly way, and it delivered.

The blog explains semantic versioning as a standardized system for labeling software updates. Versions follow a MAJOR.MINOR.PATCH format, where:

  • MAJOR: Introduces changes that break backward compatibility.
  • MINOR: Adds new features in a backward-compatible way.
  • PATCH: Fixes bugs without changing existing functionality.

The post emphasizes how semantic versioning helps both developers and users by setting clear expectations. For example, a “2.1.0” update means the software gained new features while remaining compatible with “2.0.0,” whereas “3.0.0” signals significant changes requiring adjustments. The author also highlights best practices, such as adhering to this structure for open-source projects and communicating changes through release notes.
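
As a quick illustration of my own (not from Victor Pierre’s post; the function names are invented), here is a minimal sketch of how a MAJOR.MINOR.PATCH string can be parsed and used to decide whether an update should be treated as backward compatible:

```typescript
// Minimal semantic-versioning sketch.
interface SemVer {
  major: number;
  minor: number;
  patch: number;
}

function parseSemVer(version: string): SemVer {
  const [major, minor, patch] = version.split(".").map(Number);
  return { major, minor, patch };
}

// Under semantic versioning, only a change in MAJOR signals a breaking change.
function isBackwardCompatible(current: string, update: string): boolean {
  return parseSemVer(current).major === parseSemVer(update).major;
}

console.log(isBackwardCompatible("2.0.0", "2.1.0")); // true  -- new features only
console.log(isBackwardCompatible("2.1.0", "3.0.0")); // false -- breaking changes
```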

Reading this blog clarified a lot for me. One key takeaway is how semantic versioning minimizes confusion during development. I realized that in my past group projects, we sometimes struggled to track changes because we didn’t use a structured versioning approach. If a teammate updated a module, we often didn’t know if it introduced breaking changes or just fixed minor issues. Incorporating semantic versioning could have streamlined our collaboration.

I also appreciated the blog’s simplicity. By breaking down each component of a version number and providing examples, the post made a somewhat abstract topic relatable. It reminded me that software development isn’t just about writing code but also about maintaining and communicating it effectively.

Moving forward, I plan to adopt semantic versioning in my personal projects and advocate for it in team settings. Using clear version numbers will make my code more maintainable and professional, especially as I contribute to open-source projects. If you’re looking to deepen your understanding of software versioning or improve your development workflow, I highly recommend checking out Victor Pierre’s blog. It’s a quick, insightful read that makes a technical topic approachable.

Resource:

https://victorpierre.dev/blog/beginners-guide-semantic-versioning/

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Creating a Smooth Web Experience: Frontend Best Practices

A key aspect of frontend development is creating websites that perform well and provide a seamless user experience. However, due to time constraints in class, we didn’t have many opportunities to dive deeply into frontend implementation techniques. To fill this gap, I explored the blog Frontend Development Best Practices: Boost Your Website’s Performance. Its clear explanations, organized structure, and bold highlights made it an excellent resource for enhancing my understanding of this crucial topic.

The blog provides a detailed guide on optimizing website performance using effective frontend development techniques. Key recommendations include using appropriate image formats like JPEG for photos, compressing files with tools like TinyPNG, and utilizing lazy loading to improve speed and save bandwidth. It stresses reducing HTTP requests by combining CSS and JavaScript files and using CSS sprites to streamline server interactions and boost loading speed. Another important strategy is enabling browser caching, which allows browsers to locally store static assets, reducing redundant data transfers and improving load times. The blog also suggests optimizing CSS and JavaScript by making files smaller, loading non-essential scripts only when needed, and using critical CSS to improve initial rendering speed. Additional practices include leveraging content delivery networks (CDNs) to deliver files from servers closer to users and employing responsive design principles, such as flexible layouts and mobile-first approaches, to create adaptable websites.
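
As one concrete example, here is a minimal lazy-loading sketch of my own (the blog itself includes no code, and the data-src convention is an assumption): images hold their real source in a data-src attribute and are only loaded once they scroll near the viewport.

```typescript
// Lazy-load images that declare their real source in a data-src attribute.
function lazyLoadImages(): void {
  const images = document.querySelectorAll<HTMLImageElement>("img[data-src]");

  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        const img = entry.target as HTMLImageElement;
        img.src = img.dataset.src ?? ""; // swap in the real image source
        obs.unobserve(img);              // stop watching once it has loaded
      }
    }
  });

  images.forEach((img) => observer.observe(img));
}

document.addEventListener("DOMContentLoaded", lazyLoadImages);
```

Modern browsers also support the native loading="lazy" attribute on img tags, which achieves much the same effect without any script.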

I chose this blog because it addresses frontend implementation topics that were not deeply explored in our course. Its organized layout, with bold headings and step-by-step instructions, makes the content accessible and actionable. As someone who plans to build a website in the future, I found its advice easy to understand.

Reading this blog was incredibly insightful. I learned how even small adjustments—such as choosing the right image format or enabling lazy loading—can significantly improve website performance. For example, understanding browser caching taught me how to make websites load faster and enhance the experience for returning users. The section on responsive web design stood out, emphasizing the importance of creating layouts that work seamlessly across different devices. The blog’s focus on performance monitoring and continuous optimization also aligned with best practices for maintaining high-performing websites. Tools like Google PageSpeed Insights and A/B testing offer valuable feedback to help keep websites efficient and user-focused over time.

In my future web development projects, I plan to implement the best practices outlined in the blog. This includes using image compression tools and lazy loading to improve loading times, combining and minifying CSS and JavaScript files to reduce HTTP requests, and utilizing CDNs alongside browser caching for faster delivery of static assets. I will also adopt a mobile-first approach to ensure websites function smoothly across all devices.

This blog has provided invaluable insights into frontend development, equipping me with practical strategies to optimize website performance. By applying these techniques, I aim to create websites that not only look appealing but also deliver an exceptional user experience.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

YAGNI – It could be a matter of life or death (or profits)

YAGNI – You aren’t going to need it. In the simplest terms, the principle (initially created by Ron Jeffries) means don’t overengineer before it is necessary. It means to find the solution that works for what is needed today without engineering for POTENTIAL future scenarios. Outside of software engineering, this principle is applicable to everyday […]

From the blog CS@Worcester – CurrentlyCompiling by currentlycompiling and used with permission of the author. All other rights reserved by the author.

The Backend Communication Necessity: REST APIs

Introduction

APIs are the most important piece of communication between software applications. REST APIs, in particular, have emerged as the standard for building web services due to their simplicity and scalability. This blog by John Au-Yeung explores best practices for efficient REST APIs, a topic that is essential for modern software development.

Summary Of The Source

  1. Accept and Respond with JSON: JSON is the standard format for APIs due to its readability and compatibility with most programming languages.
  2. Use Nouns Instead of Verbs in Endpoint Paths: Resources should be represented as nouns in endpoint paths, such as /users or /orders, for clarity and consistency.
  3. Handle Errors Gracefully and Return Standard Error Codes: APIs should provide clear error messages and use appropriate status codes, like 404 for not found or 500 for server errors.
  4. Maintain Good Security Practices: Implement authentication methods such as OAuth, encrypt sensitive data, and use rate limiting to prevent abuse.
  5. Versioning Our APIs: Proper versioning, such as including the version in the URL (/v1/users), allows APIs to evolve without disrupting existing integrations (a brief endpoint sketch follows this list).
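
As a rough illustration, here is a minimal endpoint sketch of my own using Express (my choice of framework; the blog does not prescribe one, and the route and data are hypothetical) that combines JSON responses, a noun-based versioned path, and a standard error code:

```typescript
import express from "express";

const app = express();
app.use(express.json()); // accept and respond with JSON

// In-memory stand-in for a real data store.
const users = [{ id: "1", name: "Ada" }];

// Noun-based, versioned endpoint path: /v1/users/:id
app.get("/v1/users/:id", (req, res) => {
  const user = users.find((u) => u.id === req.params.id);
  if (!user) {
    // Standard status code plus a clear error message
    return res.status(404).json({ error: "User not found" });
  }
  return res.json(user);
});

app.listen(3000, () => console.log("API listening on port 3000"));
```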

Why I Chose This Blog

I selected this blog because REST APIs are integral to modern software development, and understanding their design is essential for building scalable and maintainable systems. The blog provides a good understanding of REST APIs for developers at all levels.

Reflection On The Blog

The blog went over the standard conventions for designing REST APIs. One aspect that resonated with me was the emphasis on clarity and simplicity in API structure, for instance using nouns like /users instead of verbs like /getUsers for endpoint paths. Another valuable takeaway was the focus on error handling and standard status codes. Before reading this, I hadn’t fully appreciated how critical it is to provide meaningful error responses to help developers debug issues. I now recognize how returning clear messages and consistent codes can improve the user experience and reduce confusion for developers. The section on API versioning was also particularly insightful, as I hadn’t previously considered how unversioned APIs could lead to breaking changes when updates are made. This made me realize the importance of planning for future iterations during the initial API design process.

Future Application

By adopting JSON as the default format and carefully designing resource-based endpoints, I aim to create APIs that are in line with the standards laid out in this blog. I will also make sure to maintain good security practices, such as implementing authentication. Additionally, I will incorporate API versioning to ensure compatibility with older clients as updates are introduced.

Citation

Best practices for REST API design by John Au-Yeung

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

What’s that smell

Writing good, clean code is a work of art that takes a lot of practice and understanding. One of the first skills you should develop is recognizing what are called design smells. Design smells are indicators of possible poor design choices that will eventually impact the quality of your project.

We’ve all had that point when we first started learning how to write code where we wrote some pretty ugly “spaghetti” code. To start, the names of these smells are rigidity, fragility, immobility, viscosity, needless complexity, needless repetition and opacity. Most of these tend to go hand in hand, though each can also stand alone.

It’s good to know their meanings before talking about their importance.

Rigidity: when your software is difficult to change in even the simplest ways.

Fragility: a tendency for your program to break in other places when you make a change.

Immobility: having useful code that could be reused in other systems, but can’t be integrated into them easily.

Viscosity: when doing things the right way is harder than doing them the quick, hacky way, so it’s not easy to make just one clean change.

Needless complexity: containing things that don’t have any use yet.

Needless repetition: repeated code that could be abstracted instead of being written over and over.

Opacity: code that is hard to read and understand, so its intent is not clear.

While the value of being able to identify these design smells may be obvious to some, we will start with some examples before ending with why identifying these smells will be helpful in the long run.

When first starting a project, sometimes it’s hard to know where to really start. That is why developers have tools such as class diagrams, but that is for another blog. Starting off with no real design in mind, it’s easy to have needless repetition pop up. Once you’ve identified it, you can break certain pieces off and turn them into their own methods; in doing so, you may also realize the repetition has caused rigidity and fragility, because moving those lines might be much harder than you think. Those problems also lead into viscosity if you end up with a big enough mess (speaking from experience).
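
Here is a tiny sketch of my own (the function names are invented) of what that looks like in practice: the same validation check copied into two functions, and then broken off into one method both can call.

```typescript
// Hypothetical "before": the same validation rule is copied into two places,
// so changing the rule means editing (and possibly missing) both copies.
function registerUserBefore(email: string): void {
  if (!email.includes("@") || email.trim().length === 0) {
    throw new Error("Invalid email");
  }
  // ...create the account...
}

function updateEmailBefore(email: string): void {
  if (!email.includes("@") || email.trim().length === 0) {
    throw new Error("Invalid email");
  }
  // ...save the new address...
}

// "After": the duplicated check lives in its own method, so the rule exists
// in one place and both callers stay in sync.
function assertValidEmail(email: string): void {
  if (!email.includes("@") || email.trim().length === 0) {
    throw new Error("Invalid email");
  }
}

function registerUser(email: string): void {
  assertValidEmail(email);
  // ...create the account...
}

function updateEmail(email: string): void {
  assertValidEmail(email);
  // ...save the new address...
}
```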

Next is the problem of needless complexity. Without a class diagram, you might find yourself jumping ahead to try to fix a problem or build something you don’t have a need for yet. The problem is exacerbated if you don’t end up needing it at all; no one likes useless code. This can also lead to opacity, where your code makes the goals and intent of your work unclear.

With the previously discussed class diagrams (see previous blog) to guide you, along with an understanding of poor design principles, it’s easy to see how being able to identify these smells can keep your work in much better shape for yourself and others. We have stuck with the design process for now; a future blog topic will cover how to make the code itself “cleaner”.

From the blog Mikes CS 343 by Michael St. Germain and used with permission of the author. All other rights reserved by the author.