Category Archives: Week-14

Design Patterns and Code Smells

The first time I was taught about ‘smells’ in code was in connection with Robert C. Martin’s “Agile Software Development, Principles, Patterns, and Practices“. For those who have not read it, the smells are Rigidity, Fragility, Immobility, Viscosity, Needless Complexity, Needless Repetition, and Opacity. All of these are categories of red flags in code; they are problematic because over time they make the code “hard to understand, modify, and maintain”. In an earlier post of mine, I talked about design patterns, which are proven solutions to common coding problems. So when I saw an article talking about smells that can come from these design patterns, I was intrigued. This eventually led me to find out that many consider the Singleton design pattern, specifically, to be never worth using.

In an article titled “Examining the Pros and Cons of the Singleton Design Pattern“, Alex Mitchell first explains that the goal of the singleton design pattern is to “ensure a class has only one instance, and provide a global point of access to it.” He then lists the pros, which, for those who do not know or remember, are: it ensures only one instance exists throughout the code, it allows that one instance to be accessed globally, and it limits access to that instance. Then he gets to the cons, the first of which is that it violates the single responsibility principle. Next up is the pattern’s tight coupling, followed by how it complicates testing, and he finishes with how it obscures dependencies. He then offers alternatives to singletons in the form of the Monostate pattern or dependency injection, and wraps it up with a nice conclusion.
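The pattern itself is small. Here is a minimal Python sketch of my own (not code from the article) showing both the “only one instance” guarantee and the global point of access; the class name is just for illustration:

```python
class Logger:
    """Classic singleton: the class controls its own single instance."""
    _instance = None

    def __new__(cls):
        # Create the instance only on the first call; every later
        # call returns the same stored object.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

# The "global point of access": constructing it anywhere yields the same object.
a = Logger()
b = Logger()
print(a is b)  # True
```

Note how the class is doing two jobs at once, creating itself and policing access to itself, which is exactly the single responsibility complaint.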

Let’s go through the cons in order. Con #1: on a second look this seems obvious; the singleton class is simultaneously controlling its own creation and managing access to itself. Con #2: having only one instance of the object means you can’t use polymorphism or swap in alternate implementations. Con #3: you cannot test in isolation, since a singleton persists globally across tests. Con #4: the dependencies are not explicit when the singleton is used. The Monostate pattern allows multiple instances to exist while sharing the same logical state, so while config1 and config2 can both change configValue, getting configValue from either config1 or config2 would return the same value. Dependency injection is, as far as I understand it, passing the dependency into the class that uses it, so rather than the class reaching out to a global singleton, the object it needs is handed to it.
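Both alternatives can be sketched in a few lines of Python. This is my own illustration of the config1/config2 example, with hypothetical names; it is not code from the article:

```python
class Config:
    """Monostate (a.k.a. Borg): many instances, one shared logical state."""
    _shared_state = {}

    def __init__(self):
        # Every instance's attribute dict is the same shared dict,
        # so setting an attribute on one is visible through all.
        self.__dict__ = self._shared_state

config1 = Config()
config2 = Config()
config1.config_value = "verbose"
print(config2.config_value)  # "verbose" -- same logical state, distinct objects

class Report:
    """Dependency injection: the config is passed in, not looked up globally."""
    def __init__(self, config):
        self.config = config  # explicit dependency, easy to swap in tests

report = Report(Config())
print(report.config.config_value)  # "verbose"
```

The `Report` class shows why injection helps testing: a test can hand it a fake config object instead of whatever global state a singleton happens to be carrying.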

From this article I have come to a better understanding of dependency injection and will probably be using the technique in my future code, since apparently a lot of code still uses the singleton pattern and dependency injection seems to be the best way to handle existing singletons.

Link:
https://expertbeacon.com/examining-the-pros-and-cons-of-the-singleton-design-pattern/

From the blog CS@Worcester – Coder's First Steps by amoulton2 and used with permission of the author. All other rights reserved by the author.

A License to Develop Software

I read a blog titled “Software License Management” by Samantha Rohn of Whatfix. It dives into the complexities of software licensing, explaining the different types of licenses and their implications. Since I’ve been learning about open-source projects and legal considerations in software development, this blog felt like an essential read. I picked this blog because software licensing is a topic that many developers, including myself, often overlook or misunderstand. In my coursework, we’ve briefly touched on the importance of licenses, but I never fully grasped the differences between them or their real-world applications. As I start working on team projects and open-source contributions, understanding how to navigate licensing is crucial to avoiding legal issues and contributing responsibly to the developer community.

The blog provides an overview of software licensing, emphasizing why it’s critical for both developers and organizations. It categorizes licenses into two main types:

  • Permissive Licenses: These allow more flexibility. Developers can modify, distribute, and use the software with minimal restrictions, often without the need to release their modifications.
  • Copyleft Licenses: These require derivative works to retain the original license terms. For example, modifications to a product under a copyleft license must also be distributed with the same license attached.

The post also introduces the concept of software license management, highlighting the need for organizations to track, organize, and comply with licenses to avoid legal and financial risks. It concludes with best practices for effective license management, such as inventorying all software assets and ensuring compliance with usage terms.

This blog was an eye-opener for me. One thing that stood out was the explanation of copyleft licensing. Before reading this, I didn’t realize how restrictive some licenses could be in terms of sharing modifications. For instance, if I modify software with a copyleft license, I’d have to release my work under the same license, which might limit its use in proprietary projects. This insight made me rethink how I approach licensing for my own projects.

I also found the section on license management practices especially relevant. As developers, we tend to focus solely on the technical aspects of coding and ignore legal considerations. However, knowing how to choose and manage licenses is equally important, especially as I start collaborating on larger projects.

This blog gave me a clearer understanding of how to responsibly use and share code. Moving forward, I’ll make sure to read and understand the terms of any license attached to the libraries and frameworks I use. Additionally, when I create software, I’ll carefully select a license that aligns with my goals, whether for open-source contribution or proprietary use. If you’re new to software licensing or want to understand how to manage licenses effectively, I recommend reading this blog. It’s a straightforward guide to a topic every developer should know.

Resource:

https://whatfix.com/blog/software-license-management/#:~:text=For%20the%20most%20part%2C%20copyleft%20licensing%20is,with%20the%20source%20product’s%20copyleft%20license%20attached.

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Do You Smell That?

In software development, code smells are subtle yet significant indicators of potential problems within a codebase. Much like how an unpleasant odor hints at deeper issues, code smells signal areas in the code that might lead to bigger challenges if left unaddressed. The article linked below is an exploration of this concept, highlighting the importance of […]

From the blog CS@Worcester – CurrentlyCompiling by currentlycompiling and used with permission of the author. All other rights reserved by the author.

Semantics Antics

Recently, I came across an interesting blog post titled “A Beginner’s Guide to Semantic Versioning” by Victor Pierre. It caught my attention because I’ve been learning about software development best practices, and versioning is a fundamental yet often overlooked topic. The blog simplifies a concept that is vital for managing software releases and ensuring compatibility across systems. I selected this post because, in my current coursework, semantic versioning keeps appearing in discussions about software maintenance and deployment. I’ve encountered terms like “major,” “minor,” and “patch” versions while working on team projects, but I didn’t fully understand their significance or how to apply them effectively. This guide promised to break down the topic in a beginner-friendly way, and it delivered.

The blog explains semantic versioning as a standardized system for labeling software updates. Versions follow a MAJOR.MINOR.PATCH format, where:

  • MAJOR: Introduces changes that break backward compatibility.
  • MINOR: Adds new features in a backward-compatible way.
  • PATCH: Fixes bugs without changing existing functionality.

The post emphasizes how semantic versioning helps both developers and users by setting clear expectations. For example, a “2.1.0” update means the software gained new features while remaining compatible with “2.0.0,” whereas “3.0.0” signals significant changes requiring adjustments. The author also highlights best practices, such as adhering to this structure for open-source projects and communicating changes through release notes.
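The MAJOR.MINOR.PATCH rules translate directly into code. A small Python sketch of my own (the function names are mine, not from the blog) shows how version strings become comparable and how the “safe upgrade” expectation follows from the MAJOR number:

```python
def parse_version(version: str) -> tuple[int, int, int]:
    """Split a MAJOR.MINOR.PATCH string into comparable integers."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def is_safe_upgrade(current: str, candidate: str) -> bool:
    """Backward compatible as long as the MAJOR number is unchanged."""
    return parse_version(candidate)[0] == parse_version(current)[0]

print(is_safe_upgrade("2.0.0", "2.1.0"))  # True: new features, same MAJOR
print(is_safe_upgrade("2.1.0", "3.0.0"))  # False: breaking changes expected
print(parse_version("2.1.0") > parse_version("2.0.0"))  # True: tuples compare element-wise
```

Parsing into integer tuples matters because comparing the raw strings would rank "10.0.0" below "9.0.0".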

Reading this blog clarified a lot for me. One key takeaway is how semantic versioning minimizes confusion during development. I realized that in my past group projects, we sometimes struggled to track changes because we didn’t use a structured versioning approach. If a teammate updated a module, we often didn’t know if it introduced breaking changes or just fixed minor issues. Incorporating semantic versioning could have streamlined our collaboration.

I also appreciated the blog’s simplicity. By breaking down each component of a version number and providing examples, the post made a somewhat abstract topic relatable. It reminded me that software development isn’t just about writing code but also about maintaining and communicating it effectively.

Moving forward, I plan to adopt semantic versioning in my personal projects and advocate for it in team settings. Using clear version numbers will make my code more maintainable and professional, especially as I contribute to open-source projects. If you’re looking to deepen your understanding of software versioning or improve your development workflow, I highly recommend checking out Victor Pierre’s blog. It’s a quick, insightful read that makes a technical topic approachable.

Resource:

https://victorpierre.dev/blog/beginners-guide-semantic-versioning/

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Creating a Smooth Web Experience: Frontend Best Practices

A key aspect of frontend development is creating websites that perform well and provide a seamless user experience. However, due to time constraints in class, we didn’t have many opportunities to dive deeply into frontend implementation techniques. To fill this gap, I explored the blog Frontend Development Best Practices: Boost Your Website’s Performance. Its clear explanations, organized structure, and bold highlights made it an excellent resource for enhancing my understanding of this crucial topic.

The blog provides a detailed guide on optimizing website performance using effective frontend development techniques. Key recommendations include using appropriate image formats like JPEG for photos, compressing files with tools like TinyPNG, and utilizing lazy loading to improve speed and save bandwidth. It stresses reducing HTTP requests by combining CSS and JavaScript files and using CSS sprites to streamline server interactions and boost loading speed. Another important strategy is enabling browser caching, which allows browsers to locally store static assets, reducing redundant data transfers and improving load times. The blog also suggests optimizing CSS and JavaScript by making files smaller, loading non-essential scripts only when needed, and using critical CSS to improve initial rendering speed. Additional practices include leveraging content delivery networks (CDNs) to deliver files from servers closer to users and employing responsive design principles, such as flexible layouts and mobile-first approaches, to create adaptable websites.
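The browser-caching recommendation boils down to sending the right Cache-Control header for each kind of asset. As a rough sketch of the decision (my own illustration in Python, not code from the blog; the one-year `max-age=31536000` value is a common convention for fingerprinted static files):

```python
# Long-lived static assets (styles, scripts, images, fonts) can be cached
# aggressively; HTML should be revalidated so users see fresh content.
LONG_LIVED_EXTENSIONS = (".css", ".js", ".png", ".jpg", ".webp", ".woff2")

def cache_control_for(path: str) -> str:
    """Pick a Cache-Control header value for a static file path."""
    if path.endswith(LONG_LIVED_EXTENSIONS):
        return "public, max-age=31536000, immutable"  # cache for one year
    return "no-cache"  # always revalidate with the server first

print(cache_control_for("app.min.js"))  # public, max-age=31536000, immutable
print(cache_control_for("index.html"))  # no-cache
```

Aggressive caching like this assumes asset filenames change when their contents do (e.g. a content hash in the name), which is why build tools fingerprint files.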

I chose this blog because it addresses frontend implementation topics that were not deeply explored in our course. Its organized layout, with bold headings and step-by-step instructions, makes the content accessible and actionable. As someone who plans to build a website in the future, I found its advice easy to understand.

Reading this blog was incredibly insightful. I learned how even small adjustments—such as choosing the right image format or enabling lazy loading—can significantly improve website performance. For example, understanding browser caching taught me how to make websites load faster and enhance the experience for returning users. The section on responsive web design stood out, emphasizing the importance of creating layouts that work seamlessly across different devices. The blog’s focus on performance monitoring and continuous optimization also aligned with best practices for maintaining high-performing websites. Tools like Google PageSpeed Insights and A/B testing offer valuable feedback to help keep websites efficient and user-focused over time.

In my future web development projects, I plan to implement the best practices outlined in the blog. This includes using image compression tools and lazy loading to improve loading times, combining and minifying CSS and JavaScript files to reduce HTTP requests, and utilizing CDNs alongside browser caching for faster delivery of static assets. I will also adopt a mobile-first approach to ensure websites function smoothly across all devices.

This blog has provided invaluable insights into frontend development, equipping me with practical strategies to optimize website performance. By applying these techniques, I aim to create websites that not only look appealing but also deliver an exceptional user experience.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

YAGNI – It could be a matter of life or death (or profits)

YAGNI – You aren’t going to need it. In the simplest terms, the principle (initially created by Ron Jeffries) means don’t overengineer before it is necessary. It means to find the solution that works for what is needed today without engineering for POTENTIAL future scenarios. Outside of software engineering, this principle is applicable to everyday […]

From the blog CS@Worcester – CurrentlyCompiling by currentlycompiling and used with permission of the author. All other rights reserved by the author.

The Clean Code Help

The FreeCodeCamp article looks into helpful techniques for writing clean code, focusing on readability, simplicity, and naming conventions. It shows that writing clean code involves more than just making sure the code works. Using descriptive variable names, staying away from overly complicated logic, and following recognized style rules are important pointers. The Medium essay supports this by going over fundamental ideas like the significance of refactoring, DRY (Don’t Repeat Yourself), and KISS (Keep It Simple, Stupid). It also emphasizes the human element of software development: clean code encourages teamwork and lowers technical debt.
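The naming and DRY/KISS points are easiest to see side by side. A small before-and-after sketch of my own (the example and names are hypothetical, not from either article):

```python
# Before: terse names hide the intent of the calculation.
def calc(d):
    t = 0
    for x in d:
        t += x["p"] * x["q"]
    return t

# After: descriptive names and a single expression make the intent obvious.
def order_total(line_items):
    """Sum price * quantity across an order's line items."""
    return sum(item["price"] * item["quantity"] for item in line_items)

items = [{"price": 2.5, "quantity": 4}, {"price": 1.0, "quantity": 3}]
print(order_total(items))  # 13.0
```

Both functions compute the same thing, but only one can be read without decoding what `d`, `p`, and `q` stand for, which is the whole argument for naming conventions.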

I chose these articles because they bring useful insight into an important part of software development that is often overlooked, especially by people who think their code is “finished” once it works. They deepen what we learned in class about clean code and shape how I should approach reviewing and editing the code I write in the future. These materials show that clean code is about more than just functioning; it’s also about sustainability, readability, and future-proofing, and they provide a more comprehensive viewpoint on how to approach coding and editing in future projects. I found them a useful reminder of the importance of prioritizing clarity and maintainability over complicated solutions. For example, in previous classes where I needed to code, I often prioritized utility over how others might interpret my work. After reading these articles, I realized how important modular design and naming standards are for debugging and security as well as group projects.

Among the ideas that resonated deeply was “refactoring as a discipline.” Beyond just cleaning up code, refactoring provides an opportunity to reevaluate and enhance a project’s overall structure. This viewpoint changed the way I approach coding: I now consider refactoring an essential step in maintaining long-term code quality rather than a tiresome task, and it promotes a proactive approach to ongoing improvement. Going forward, I plan to integrate these principles into my coding practice. I will be more intentional about naming conventions, structuring code logically, and refactoring regularly. By doing this, I hope to create code that is not only functional but also clear, maintainable, and ready for future development.

From the blog CS@Worcester – A Bostonians Blogs by Abdulhafeedh Sotunbo and used with permission of the author. All other rights reserved by the author.

The Human Aspect of Software Efficiency: Managing Your Software Team

Intro

Managing a team effectively is essential for delivering high-quality projects on time. A mismanaged team can lead to needless loss of productivity, whether it’s because of confusion between the team, or wasting time on useless tasks. This blog by Ben Brigden delves into strategies and practices that help teams collaborate efficiently, maintain focus, and deliver exceptional results.

Summary of the Source

The blog explores nine tips for managing software development teams; these are the five I think stood out most:

  1. Clearly Define Goals and Expectations: Outlining precise objectives ensures everyone on the team is aligned and working toward the same outcomes.
  2. Understand the Expertise of Your Team: Recognizing each team member’s strengths and specialties allows managers to delegate tasks effectively and maximize productivity.
  3. Protect Your Team from Busy Work: Shielding developers from unnecessary tasks helps them focus on meaningful, high-impact work.
  4. Emphasize Autonomy and Self-Reliance: Encouraging team members to take ownership of their tasks fosters independence and builds trust.
  5. Measure Performance and Strive for Continuous Improvement: Using performance metrics and retrospectives ensures progress and helps refine team processes.

Why I Chose This Blog

This blog is a good resource that covers the top points about working effectively in a software team. It touches on the aspects of being on a software team that someone might want to know to better understand their role.

Reflection of the Content

The blog emphasized that team success hinges on communication and collaboration. One point that stood out was the importance of defining goals and expectations clearly. This is probably the most important thing in a software team. Without goals, developers become aimless and don’t know what to do or how what they are doing fits into an overarching plan. This is why agile development, and Scrum in particular, is effective: it sets clear goals within a certain time frame, with a definition of done, so everyone is on the same page about what needs to be done and what it means for a task to be completed.

The emphasis on autonomy is probably underrated in team environments, because I think people see the word “team” and assume that everyone has to know what everyone else is doing. Being trusted with your own work probably has a psychological effect on productivity as well: when someone is given responsibility, they are more likely to live up to their potential than if they are being babied.

Future Application

This blog has helped me better understand the ways a team should function to maximize its potential, and seeing as I plan to work in the tech industry, I think it’s valuable to know this to help myself and my future team work to our potential.

Citation

9 tips to manage your software development team (no coding required) by Ben Brigden

https://www.teamwork.com/blog/software-development-team-management/

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Good Git Resources to Help Beginners Learn

I recently read the article “Git Best Practices – A Guide to Version Control for Beginners” on freeCodeCamp.org. This piece offers a comprehensive introduction to Git, emphasizing essential practices for effective version control.

I chose this article because, as a newcomer to software process management, I wanted to understand how Git can enhance collaboration and efficiency in development projects. The article’s focus on best practices provided a clear roadmap for integrating Git into my workflow.

The content delves into fundamental Git concepts, such as initializing repositories, committing changes, and branching strategies. It underscores the importance of clear commit messages and regular repository maintenance. A key takeaway for me was the significance of atomic commits—ensuring each commit represents a single, logical change. This practice not only simplifies tracking changes but also aids in pinpointing issues during code reviews.

The article also highlights the role of branching in facilitating parallel development. Understanding how to create and manage branches allows for isolated feature development, reducing the risk of conflicts in the main codebase. This insight has reshaped my approach to project structuring, making me more confident in handling complex tasks.

Reflecting on the material, I’ve realized the transformative impact of adhering to Git best practices. They not only streamline the development process but also foster better team collaboration. Moving forward, I plan to implement these practices diligently, aiming to contribute more effectively to projects and enhance overall code quality.

For those interested in exploring this topic further, I recommend reading the full article on freeCodeCamp.org: Git Best Practices – A Guide to Version Control for Beginners.

From the blog CS@Worcester – Zacharys Computer Science Blog by Zachary Kimball and used with permission of the author. All other rights reserved by the author.

Docker Servers Under Attack

This week, I came across an article discussing how attackers have recently been targeting Docker remote API servers. Docker is something we have discussed in my CS 348 class, so this was immediately intriguing. I know that Docker is used on many projects so that teams can all work with the same software. Obviously, attacks on these servers are of great concern. The article begins by stating the issue and giving an example of a recent attack. It quickly jumps into explaining how these attackers pull this off. The attack starts with a ping to the remote API server. Once they are able to get information from the server, they create a container with the same name and allow themselves access to privileged mode. From there, the attackers have the reins to complete the attack. The article then goes on to show exactly how the attackers do it, with shell scripts and examples. It concludes with a list of recommendations on how to prevent these attacks on your own Docker remote API servers.
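The prevention advice largely comes down to not leaving the remote API exposed without authentication. As one hedged illustration (my own sketch, not taken from the article; the certificate paths are placeholders), a `dockerd` daemon.json that requires TLS client certificates on the TCP endpoint might look like:

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/ca.pem",
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem"
}
```

With `tlsverify` enabled, only clients presenting a certificate signed by the listed CA can reach the API at all, which would stop the anonymous probing that starts the attack chain described above. Better still, if remote access isn’t needed, the TCP entry can simply be omitted.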

I found this article quite interesting for a couple of reasons. The first is that, as I am still new to Docker and its features, I was unaware that it was susceptible to attacks like this. While I now know this is not a normal occurrence, it was still surprising to me. I am also now aware that whoever runs the server must configure the settings properly and pay attention to the server. Another reason I found this interesting is that I have an interest in cybersecurity and networking. Not only was I able to learn more about what we had talked about in class, but also about what I am learning outside of class.

There was a good amount of knowledge to take away from this article. I learned that even in software created by and for computer scientists, you can’t trust it blindly. This is not to knock Docker, but more of a reminder to myself, as it is something I am responsible for, not the software. It also shows how much more there is to being a computer scientist than just writing code, and if that is the only responsibility you prioritize, it will prove to be problematic for you and those you are working with. It was also pretty cool for me to see the actual scripting used for these attacks as I am learning more about cybersecurity.

URL: https://www.trendmicro.com/en_us/research/24/j/attackers-target-exposed-docker-remote-api-servers-with-perfctl-.html

From the blog CS@Worcester – Auger CS by Joseph Auger and used with permission of the author. All other rights reserved by the author.