Category Archives: Week 11

Software Design Principles

For this week, I decided to find a blog about software design principles to refresh my mind on the topic. I found a blog called “Main Software Design Principles Every Developer Should Know” by Amr Saafan. The reason I decided to discuss this blog specifically is that it breaks down each section into short, simple bullet points.

The first section covers Object-Oriented Programming (OOP) principles. It explains principles like encapsulation, which guarantees that objects securely manage their data and behavior while concealing their complexity and granting limited access. Next is abstraction: by concealing details and concentrating on key elements, abstraction makes problem-solving easier and allows for solutions that are extendable and reusable. Inheritance encourages code reuse by enabling classes to share properties and functions, whereas polymorphism enables flexible behavior depending on object types, increasing flexibility and minimizing code duplication. I remembered these principles the most, possibly because I’ve had plenty of discussions about them; even so, I found the explanation of encapsulation very helpful.
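
To make these ideas concrete, here is a small sketch I put together in Python (the account classes are my own hypothetical example, not code from the blog):

```python
class Account:
    """Encapsulation: the balance is managed internally and exposed only through methods."""

    def __init__(self, owner, balance=0):
        self._owner = owner        # leading underscore signals "internal" by convention
        self._balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("Deposit must be positive")
        self._balance += amount

    def describe(self):
        return f"{self._owner}'s account holds {self._balance}"


class SavingsAccount(Account):
    """Inheritance: reuses Account's data and methods; polymorphism: overrides describe()."""

    def describe(self):
        return f"{self._owner}'s savings account holds {self._balance}"


# Polymorphism in action: the same call works on either type.
for acct in (Account("Ada", 100), SavingsAccount("Ada", 500)):
    print(acct.describe())
```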

Software design is further improved by the SOLID principles. In order to improve testability and maintenance, the Single Responsibility Principle (SRP) promotes modular classes with distinct responsibilities. In order to maintain stability, the Open/Closed Principle (OCP) advises adding functionality through new code rather than changing existing code. The Interface Segregation Principle (ISP) promotes short, targeted interfaces, minimizing needless dependencies, whereas the Liskov Substitution Principle (LSP) preserves compatibility between base and derived classes. The Dependency Inversion Principle (DIP), which prioritizes abstractions over concrete implementations, encourages loose coupling. I needed a refresher on LSP and ISP the most out of this section, and I found the explanations helpful.
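
As a quick illustration of SRP and DIP (again my own hedged sketch, not code from the blog), a report generator can depend on an abstract storage interface instead of a concrete class:

```python
from abc import ABC, abstractmethod


class Storage(ABC):
    """Abstraction that high-level code depends on (Dependency Inversion)."""

    @abstractmethod
    def save(self, name: str, data: str) -> None: ...


class DiskStorage(Storage):
    def save(self, name: str, data: str) -> None:
        with open(name, "w") as f:
            f.write(data)


class InMemoryStorage(Storage):
    def __init__(self):
        self.files = {}

    def save(self, name: str, data: str) -> None:
        self.files[name] = data


class ReportGenerator:
    """Single responsibility: build the report; where it goes is someone else's job."""

    def __init__(self, storage: Storage):
        self.storage = storage  # depends on the abstraction, not a concrete class

    def run(self) -> None:
        self.storage.save("report.txt", "weekly totals...")


# Swapping implementations requires no change to ReportGenerator (Open/Closed in spirit).
ReportGenerator(InMemoryStorage()).run()
```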

The next section, Don’t Repeat Yourself (DRY), explains how eliminating redundancy promotes simpler, maintainable, and error-resistant code through modularity and reuse. Keep It Simple, Stupid (KISS) promotes straightforward, effective solutions that are simpler to comprehend and maintain, while discouraging overengineering. The You Aren’t Gonna Need It (YAGNI) principle suggests avoiding unneeded features in advance, simplifying things, and concentrating on urgent needs. Those principles felt self-explanatory to me, but they were explained well. The Law of Demeter (LoD) improves modularity and decreases coupling by restricting an object’s interactions to its immediate dependencies. I definitely needed the explanation of this principle. I found it helpful to have the list of how to apply it, which included items like Limit Direct Dependencies, Use Interfaces, and Avoid Chain Calls; it aided in my understanding of the principle.
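
The Avoid Chain Calls advice clicked for me with a small example like this (my own hypothetical sketch, not the blog’s):

```python
class Engine:
    def start(self):
        print("engine started")


class Car:
    def __init__(self):
        self._engine = Engine()

    # Law of Demeter: expose what callers need instead of handing out internals.
    def start(self):
        self._engine.start()


class Driver:
    def drive(self, car: Car):
        # car.get_engine().start() would chain through a stranger's internals;
        # talking only to the immediate collaborator keeps coupling low.
        car.start()


Driver().drive(Car())
```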

Next, the Composition Over Inheritance principle suggests building systems with reusable components rather than deep inheritance hierarchies, improving flexibility and modularity. Finally, the Principle of Least Astonishment (POLA) ensures software behaves as expected, minimizing confusion by being clear, consistent, and intuitive.
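
Here is a tiny sketch, using hypothetical classes of my own, of what composition over inheritance can look like:

```python
class Logger:
    def log(self, message):
        print(f"[log] {message}")


class Notifier:
    def send(self, message):
        print(f"[email] {message}")


class OrderService:
    """Composed from small reusable parts instead of inheriting from a deep hierarchy."""

    def __init__(self, logger: Logger, notifier: Notifier):
        self.logger = logger
        self.notifier = notifier

    def place_order(self, item):
        self.logger.log(f"order placed: {item}")
        self.notifier.send(f"your {item} is on the way")


OrderService(Logger(), Notifier()).place_order("keyboard")
```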

As a software developer, I will use these design principles to help design scalable, flexible, and user-friendly systems. KISS and YAGNI simplify designs to avoid unnecessary complexity, while composition and encapsulation add flexibility. Following POLA makes a system easier to understand and use for both developers and users.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

Masters in Scrum

One method I’ve encountered repeatedly in both my coursework and during discussions with peers is Agile—specifically, the Scrum framework. To better understand it, I recently read an article titled “Scrum Mastering the 3 Pillars, 5 Values, and 7 Key Principles of Agile Project Management”, which provides a clear breakdown of how Scrum works and why it’s so effective in software development. I found this resource insightful, and it’s something I can definitely apply in my future career.

The article explains the fundamental elements of Scrum, which include the 3 Pillars, 5 Values, and 7 Key Principles that form the foundation of this Agile framework. The 3 Pillars—Transparency, Inspection, and Adaptation—ensure that the process is open, regularly assessed, and flexible. The 5 Values—Commitment, Courage, Focus, Openness, and Respect—help create a collaborative and supportive team environment. Finally, the 7 Key Principles emphasize continuous improvement, self-organizing teams, and the importance of simplicity in problem-solving.

I selected this article because, as a beginner in computer science, I wanted to understand how project management frameworks like Scrum can be applied in real-world software development. Being new to coding and programming, I often feel overwhelmed by the amount of information and tools available. Scrum, with its structured approach, offers a clear way of organizing tasks, fostering teamwork, and ensuring that progress is continually monitored. Learning about Scrum is relevant to my future career because it’s widely used in the tech industry, particularly for software development and managing complex projects.

From reading the article, I gained a solid understanding of the core principles that make Scrum effective. The 3 pillars stood out to me, especially Transparency. As a student, I can relate to the importance of transparency in team projects where communication is key to understanding who’s doing what, when, and how. Inspection and Adaptation also made me realize how crucial it is to frequently check our progress and be willing to change course when necessary, which can save a lot of time and effort in the long run.

The 5 Values were a reminder of the importance of collaboration and maintaining a positive, respectful team environment. These values are essential, not just for Scrum but for any professional setting. I particularly appreciated the focus on Courage, which resonated with me as I’m still learning how to approach new and challenging problems in my coursework.

Finally, the 7 Key Principles reinforced the idea of simplicity and the need to avoid overcomplicating solutions, something I’ve noticed in my own work when I get caught up in trying to build complex solutions rather than focusing on what’s truly necessary.

I plan to apply the principles of Scrum, especially the importance of adaptation and simplicity, in my future projects. Whether it’s a group coding project or individual work, Scrum’s emphasis on regular inspection and continuous improvement will help me ensure that I’m always learning and adjusting as I go.

Resource:

“Scrum Mastering the 3 Pillars, 5 Values, and 7 Key Principles of Agile Project Management”

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Software Maintenance

Source: https://www.geeksforgeeks.org/software-engineering-software-maintenance/

This article is titled “Software Maintenance – Software Engineering.” Software maintenance “refers to the process of modifying and updating a software system after it has been delivered to the customer.” There are many different aspects involved in this, including fixing bugs, adding new features, and keeping up with new hardware and software requirements. Maintenance is very important for ensuring that software remains usable over the long term. This process can be expensive and complex, so these factors must be taken into account during the planning of a software development project. The important tasks in regard to software maintenance are bug fixing, enhancements, performance optimization, porting and migration, re-engineering, and documentation. Summarizing these tasks: it is important to find and fix errors quickly, add new features and improve existing ones, improve the performance of the software, adapt the software to run on different hardware, improve the design, and maintain accurate documentation of all of these processes.

There are quite a few different types of software maintenance, but they can be categorized into proactive and reactive types. “Proactive maintenance involves taking preventive measures to avoid problems from occurring, while reactive maintenance involves addressing problems that have already occurred.” Maintenance can be done by stakeholders, the development team, or a third party, and it can be either planned or unplanned. Planned maintenance can be described as regular maintenance (bug fixes), while unplanned maintenance can be described as reactive maintenance that occurs when something unexpected happens. Maintenance can also fall into these categories: corrective, adaptive, perfective, and preventive. Corrective maintenance refers to fixing bugs and enhancing the performance of the system. Adaptive refers to modifications made when a customer needs the software to run on a different system. Perfective refers to adapting the software when a customer has a new demand. Lastly, preventive maintenance refers to modifications that focus on preventing future issues with the software. Software maintenance is important, but there are some things to consider: the cost, complexity, the possibility of new bugs, users not updating the software, compatibility, technical debt, and end-of-life (when maintenance is no longer possible or cost-effective).

I chose this article because I found it in the syllabus and thought the topic was interesting. We are always learning about the development of software, but the idea of maintaining it over the long term isn’t considered as heavily. A large part of the work of a software development team is obviously to develop software, but it is also important to learn how that software can maintain a sense of longevity, free from errors and customer complaints. I will keep the information I learned from this article in mind in future projects and when I’m working with a team, so that I’m developing software while keeping maintenance in mind. If maintenance is considered during the development process, the maintenance process will be much easier.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

What’s an API? A Casual Guide for Noobs

Have you seen those memes about junior developers pushing the API key as a comment but never understood why it is such a big deal? Well, have no fear, because if you have no idea what an API is, you’re in the right place.

So, What Even Is an API?

API stands for Application Programming Interface. It sounds super technical, but it’s not that bad. Basically, an API is like a menu at a restaurant. The menu tells you what dishes (functions) the kitchen (the system) can make for you. You don’t need to know how they’re cooking your pasta in the back; you just order, and it shows up at your table.

In the tech world, an API does the same thing. It lets one app talk to another without knowing all the inner details of how the other app works. Cool, right?

Why Do We Need APIs?

Imagine you’re building an app that needs weather data. You could go out, set up weather stations, and measure the weather yourself (good luck with that). OR you could just use a weather API that already collects and shares this data for you. APIs save you a ton of time by letting you use existing tools and data instead of building everything from scratch.

How Does It Work?

Here’s a quick breakdown:

  1. You Make a Request: Your app sends a request to the API. Think of it like sending a text message that says, “Hey, can I get today’s weather for [city name]?”
  2. The API Responds: The API sends back the info you need, usually in a format like JSON (basically a fancy way of organizing data).

That’s it. It’s like texting a really reliable friend who always gives you the answers you need.
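
Here’s roughly what that exchange looks like in Python with the requests library; the weather URL, parameters, and response fields below are made-up placeholders, not a real service:

```python
import requests

# Hypothetical endpoint and API key -- stand-ins for whatever weather API you sign up for.
url = "https://api.example-weather.com/v1/current"
params = {"city": "Worcester", "apikey": "YOUR_API_KEY"}  # never commit a real key!

response = requests.get(url, params=params)   # 1. you make a request
data = response.json()                        # 2. the API responds, usually with JSON

print(data)  # e.g. {"city": "Worcester", "temp_f": 41, "conditions": "cloudy"}
```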

Real-Life Examples of APIs

  • Google Maps API: Used by apps to show maps and directions.
  • Twitter API: Lets developers pull tweets or post updates automatically.
  • Spotify API: Allows you to add music to your app or create custom playlists.

Even when you’re signing into a website with Google or Facebook, there’s an API making that happen behind the scenes.

Why Should You Care?

If you’re a CS student like me (or thinking about becoming one), learning to use APIs is a game-changer. It’s how you get your apps to do cool things without reinventing the wheel. Plus, if you ever want to work in software development, knowing how to interact with APIs is a must.

So, next time you hear someone drop “API” in a convo, you can confidently nod and say, “Oh yeah, I’ve used APIs before.” (Fake it ‘til you make it, kings.)

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

Trying to use Rest API

In this blog post, I’ll share my thoughts on an article I read titled “What is a REST API?” from Cleo’s blog. This article dives into the concept of REST APIs (Representational State Transfer), and after reading it, I feel like I now have a much clearer understanding of how REST APIs work and why they’re so important in modern web development. This topic ties directly into our web development course, where we’re learning about web services and how to connect different systems.

The article explains what REST APIs are and why they are widely used. It starts by explaining the core principles of REST, such as statelessness and resource-based URIs (Uniform Resource Identifiers). In simple terms, REST APIs allow different software systems to communicate over the internet by sending requests (like GET, POST, PUT, DELETE) to a server, where each request is independent and contains all the necessary information to be processed. The article also discusses the scalability and flexibility of REST APIs, which make them a popular choice for building web applications that need to handle a large number of users or integrate with other services.
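
To connect this to code, here is a small sketch of my own using Python’s requests library against a hypothetical /posts resource; every call carries everything the server needs to process it, which is the statelessness the article describes:

```python
import requests

BASE = "https://api.example.com"  # placeholder server, not a real one

# Each request is self-contained: the URI identifies the resource,
# the method says what to do, and the body carries all needed data.
created = requests.post(f"{BASE}/posts", json={"title": "Hello", "body": "First post"})
post_id = created.json()["id"]

requests.get(f"{BASE}/posts/{post_id}")                          # read it back
requests.put(f"{BASE}/posts/{post_id}", json={"title": "Hi"})    # replace/update it
requests.delete(f"{BASE}/posts/{post_id}")                       # remove it
```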

I chose this article because I’ve heard the term “REST API” thrown around in class and in tech articles, but I never fully understood how they work. As a computer science beginner, I often find myself struggling to grasp concepts like APIs and how they fit into the bigger picture of web development. Since we’re covering APIs and web services in our course, I figured reading a simple, clear article would help me solidify my understanding of this important topic.

After reading the article, I feel much more confident about my understanding of REST APIs. Before, I knew APIs were used to transfer data between different applications, but I didn’t fully understand how REST APIs specifically work. The article’s explanation of statelessness was particularly eye-opening to me. I had no idea that each request in a REST API is self-contained, meaning it doesn’t rely on any prior interactions to be processed. This makes sense when you think about how web applications need to be scalable and efficient—keeping things stateless helps ensure the server isn’t overloaded with unnecessary data.

Another thing I found interesting was the explanation of how RESTful APIs use HTTP methods (like GET and POST) to interact with resources. It made me realize how intuitive and flexible REST is for creating services that can easily be integrated with other software systems. I now feel much more comfortable working with APIs.

I want to explore more advanced topics, like authentication and error handling, which the article briefly touched on. This will help me build more secure and reliable web applications.

Resource:

https://www.cleo.com/blog/blog-knowledge-base-what-is-rest-api

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Introduction to Pattern Designing

Source: https://www.geeksforgeeks.org/introduction-to-pattern-designing/

This article is titled “Introduction to Pattern Designing.” In regards to software development, “pattern designing refers to the application of design patterns, which are reusable and proven solutions to common problems encountered during the design and implementation of software systems.” These reusable design patterns showcase relationships that occur between classes or objects. They are language-independent, so they can be described as ideas that make code flexible and overall speed up the process of development. Their purpose is to solve common problems.

There are three main kinds of design patterns: creational, structural, and behavioral. “Creational design patterns abstract the instantiation process.” Creational design patterns offer a sense of flexibility in regards to “what gets created, who creates it, how it gets created, and, when.” Knowledge about which concrete class is being used is encapsulated, and the way instances of classes are created is hidden. “Structural design patterns are concerned with how classes and objects are composed to form larger structures.” Inheritance is used to compose interfaces and implementations. Structural design patterns are good for when you want to make independent class libraries collaborate effectively with one another, and they offer flexibility regarding object composition. “Behavioral design patterns are concerned with algorithms and the assignment of responsibilities between objects.” Patterns of communication are being described here. Inheritance is used to divide behaviors between classes, object composition is used for behavioral object patterns, and the object patterns encapsulate behaviors in objects.

Overall, the benefits of pattern designing are reusable solutions, scalability, and abstraction/communication. The downsides, however, are that there is a learning curve while you try to understand the patterns, there may be uncertainty about when you should apply the patterns in your code, and if patterns aren’t implemented consistently and in step with the advancement of the system, maintenance issues may occur. Regardless, they are a great way to solve common problems during the development process.
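
As one concrete example of a creational pattern (a simple factory sketch of my own, not taken from the article), the calling code asks for a shape without ever knowing which concrete class it receives:

```python
class Circle:
    def draw(self):
        return "drawing a circle"


class Square:
    def draw(self):
        return "drawing a square"


def shape_factory(kind: str):
    """Creational pattern: callers say what they want, not how it gets constructed."""
    shapes = {"circle": Circle, "square": Square}
    if kind not in shapes:
        raise ValueError(f"unknown shape: {kind}")
    return shapes[kind]()


print(shape_factory("circle").draw())  # the caller never touches the concrete classes
```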

I chose this topic because the idea of design patterns was in the syllabus and it interested me. We learned about design patterns such as Factory, Strategy, and Singleton, but reading about the larger categories of creational, structural, and behavioral patterns offered deeper insight into the topic. The supposed benefits of common methodologies in software development are always presented, but it is also good to know about the downsides, which I am glad this article pointed out for design patterns. When I am working on a team or in the workforce, I will definitely reference these design patterns to improve the maintainability and scalability of my code, and do so in a way that avoids the pitfalls of implementing them incorrectly.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Agile and its Shortcomings

https://www.codingame.com/blog/agile-failed-peek-future-programming

This blog post by CodinGame provides a short history of development methodologies and goes on to make a critique of specifically Agile. It describes how, despite how widespread the methodology has now become, Agile has generally not succeeded as a methodology because of how it has been implemented by corporate management teams. While Agile as a methodology strives to be a set of principles that should guide a team to good practice and a healthy work environment, non-programmers use it as a tool to enforce hierarchical structures and rigid development. Most of what is said can likely also be applied to Scrum, but it is not explicitly mentioned.

This blog interested me because when I learned about Agile and Scrum, I always thought to myself, “Why would you ever not choose these methodologies? They seem far superior to outdated methods like Waterfall.” However, this post opened my eyes to the fact that Agile really only works when implemented as the authors of the manifesto intended. This post makes it very clear that what makes a methodology successful, or a team successful in general, is understanding its intent and being able to reflect on whether that intent aligns with the work style of the team in question. Generally, I feel that if you’re a business leader who wants a rigid plan, then you should just follow a rigid plan like Waterfall, rather than creating a fake team experience with a smoke-and-mirrors version of Agile.

The post helped me reflect on what to look for in a well-functioning team. I think these insights can be very valuable when looking for a place of work: if you apply these critiques as a tool to analyze employers, it may become very apparent at some point in the process whether a team is run by a group of developers or by non-programmers enforcing a strict hierarchical system of development. I think this kind of resource would also be useful in any position where one’s input is valued in deciding how a team should handle itself, as it can help in recognizing good and bad tendencies in a team, especially in a leadership position where hearing the whole team’s voice is valuable. Being able to express why a decision may be bad is not only valuable for working in a team but also for working under management, as articulated thoughts may be enough to have an impact on their perspective as well.

This blog highlights the importance of understanding and respecting the intent behind methodologies like Agile; it serves as a notice of how we need to hold ourselves and team leaders accountable for how a team chooses to go about development.

From the blog CS@Worcester – CS ZStomski by Zachary Stomski and used with permission of the author. All other rights reserved by the author.

Microservices and Their Importance

The blog “Microservices Architecture for Enterprises,” authored by Charith Tangirala, provides a detailed summary of a talk by Darren Smith, a General Manager at ThoughtWorks Sydney, on implementing microservices architecture in large enterprises. The blog explores key considerations for transitioning from monolithic systems to microservices, including cultural, technical, and operational changes. It highlights critical topics such as dependency management, governance, infrastructure automation, deployment coupling, and the evolving role of architects in fostering collaboration between technical and business teams. By sharing practical insights, the blog offers a framework to assess whether microservices are suitable for specific organizational goals and challenges.

I picked this blog because we are currently learning about microservices and APIs in class. I wanted to explore how the concepts we are studying are applied in real-world scenarios and understand their practical importance. This blog stood out because it connects the theoretical foundation of microservices with the challenges and solutions encountered in large enterprises. By studying this resource, I aimed to gain insights into why microservices are a valuable architectural choice and what factors should be considered when implementing them.

One key takeaway from the blog was the explanation of “deployment coupling.” It was interesting to learn how monolithic systems often require synchronized deployments, while microservices, through the use of REST APIs, allow for independent service releases. This flexibility is one of the main benefits of microservices. At the same time, the blog points out operational challenges, such as the complexity of monitoring and managing numerous services, which requires a strong DevOps infrastructure. It reminded me that while microservices can provide agility, they also demand careful planning and strong operational practices.

Another important point was the emphasis on organizational culture. The blog highlights how the success of microservices depends on cross-team cooperation, education, and alignment. This reinforces the idea that architecture isn’t just about technology—it’s about how people and teams work together. It made me realize that adopting microservices is as much about communication and collaboration as it is about code. I’ve also learned that companies like Netflix and Amazon have already implemented microservices architecture, leading to significant success with their products. This real-world application of microservices by industry leaders shows how impactful this approach can be when implemented effectively, further inspiring me to learn more and apply these principles in future projects.

As I move forward, I want to keep learning more about microservices and APIs and use what I’ve learned in future projects. My goal is to apply these concepts to real-world problems and build systems that are flexible and efficient. I also hope to use this knowledge in my future career as a Software Developer, where I can create scalable and innovative solutions.

Sources:

Microservices Architecture for Enterprises

Citation:

Tangirala, Charith. “Microservices Architecture for Enterprises.” ThoughtWorks, 13 July 2015, http://www.thoughtworks.com/insights/blog/microservices-architecture-for-enterprises. Accessed 23 Nov. 2024.

From the blog CS@Worcester – CodedBear by donna abayon and used with permission of the author. All other rights reserved by the author.

Optimizing Docker in the Cloud

After our recent studies of how Docker manages dependencies and ensures consistent development environments, I was interested in learning more about how to use it, because I thought something like this could have saved me many hours of troubleshooting while completing a recent research project. This article, written by Medium user Dispanshu, highlights the capabilities of Docker and how to use the service efficiently in a cloud environment.

The article focuses on optimizing Docker images to achieve high-performance, low-cost deployment. The problem some developers run into is having very large images which slow the build processes, waste storage, and reduce application speeds. I learned from this work that large images result from including unnecessary tools/dependencies/files, inefficient layer caching, and including other full images (like Python in this case). Dispanshu focuses on achieving the solution in 5 parts: 

  1. Multi-stage builds
  2. Layer optimizations
  3. Minimal base images (including Scratch)
  4. Advanced techniques like distroless images
  5. Security best practices

Using these techniques, the image size drops from 1.2GB to 8MB! The most impactful change is multi-stage builds, to which the writer attributes over 90% of this size reduction. I have never used these techniques before, but my interest was definitely piqued when I saw the massive size reduction that resulted from these changes.

The multi-stage builds technique separates the build stage from the production stage. By using this technique, build-time dependencies are kept out of the actual runtime environment, which avoids the inclusion of any unnecessary files or tools in the resulting image. Another technique recommends minimal base images: using the slim or alpine version (for Python) over the full version for the build stage, and using the scratch base image (no OS, no dependencies, no data or apps) for the production stage. Using a scratch image has pros and cons, but when we are considering image size and optimization this is an ideal route.
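
As a rough sketch of the multi-stage idea (my own minimal example, assuming a simple Python app with an app.py and requirements.txt, not the article’s exact Dockerfile):

```dockerfile
# --- build stage: has pip and build tooling, none of which ships in the final image ---
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
# Install dependencies into an isolated prefix we can copy out later.
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# --- production stage: only the slim runtime plus the installed packages ---
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY app.py .
CMD ["python", "app.py"]
```

In a sketch like this, pip, build caches, and requirements.txt never make it into the final image, which is the basic mechanism behind the size reductions the article reports.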

Another interesting piece of this article is the information relating to advanced techniques like distroless images, using Docker Buildkit, using .dockerignore file, and eliminating any excess files. The way that distroless images are explained by the writer makes the concept and the use case very clear. The high-level differences between the Full Distribution Image, the Scratch Image, and the Distroless Image are described as the different ways we can pack for a trip:  

  1. Pack your entire wardrobe (Full Distribution Image)
  2. Pack nothing and buy everything at your destination (Scratch Image)
  3. Pack only what you’ll actually need (Distroless Image)

The analogy makes understanding the relationship between these three image options seemingly obvious, but I can imagine applying any of these techniques described would require some perseverance. This article describes an architecture that juggles simplicity, performance, cost, and security with very impactful results. The results of this article are proof of the value these techniques can provide, and I will be seeking to apply them in my future work.

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Exploring OpenAPI with YAML

A blog from Amir Lavasani caught my attention this week as it perfectly aligns with topics we are currently focusing on in our course, like working with the OpenAPI Specification (OAS) using YAML. Working with REST APIs is relatively new for me, and fully comprehending how these requests and interactions work is still a work in progress. Lavasani structures this post as a tutorial for building a blog posts API, but there is a surplus of background information and knowledge provided for each step. Please consider reading this blog post and giving the tutorial a try!

This tutorial starts, as we all should, with planning out anticipated endpoints and methods, understanding the schema of the JSON objects that will be used, and acknowledging project requirements. This step, though simple, is a helpful reminder that planning is vital to ensuring a clean and concise structure once the work is implemented. Moving into the OAS architecture, Lavasani simplifies it into the Metadata Section, the Info Object, the Servers Object, and the Paths Object (and all the objects that fall within it). Authentication is touched on briefly, but the writer provides links to outside sources to learn more. For me, this post reiterates all the OpenAPI work we have completed in class with another example of project application, and it provides additional resources to learn from and work with.

Most of the basics this post touches on we have already reviewed in our course, but the writer provides valuable information about the finer details. A hint that was useful to me was the ability to use Markdown syntax in the description fields of an OAS; this can improve the ease of use and understanding of the API. I also learned how the full URL is constructed from the paths object. We know our paths are based on the endpoints we define and the operations (HTTP methods) they support, but to make sense of it all, these pieces are concatenated with the URL from the servers object to create the full URL. This is somewhat obvious, but seeing it spelled out as Lavasani does is very useful for reinforcing what we know about these interactions. Another new piece of knowledge relates to the parameter object. I was not initially aware of all the ways we can set operation parameters. We know how to use parameters via the URL path or the query string, but we can also use headers or cookies for this purpose, which is useful to know for future implementations. Lastly, the writer mentions the OpenAPI Generator, which can generate client-side and server-side code from an OAS; though I am not familiar with this tool, I can see its practicality, and I will likely try to complete this tutorial and follow up by learning about the outside tools mentioned.
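
To tie those pieces together, here is a minimal OAS fragment of my own (the server URL, path, and parameter are placeholders, not Lavasani’s exact spec); the full request URL is the server URL plus the path, so a GET on this spec would hit https://api.example.com/v1/posts/42:

```yaml
openapi: 3.0.3
info:
  title: Blog Posts API
  description: A **Markdown-friendly** description, as the post points out.
  version: 1.0.0
servers:
  - url: https://api.example.com/v1   # joined with each path to form the full URL
paths:
  /posts/{postId}:
    get:
      summary: Fetch a single post
      parameters:
        - name: postId
          in: path            # parameters can also live in query, header, or cookie
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested post
```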

This blog provides a practical example of working with the OpenAPI Specification, reinforcing concepts we’ve studied in class while providing useful insights. 

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.