Category Archives: CS-343

Understanding UML: A Simple Guide to the Unified Modeling Language

In 1997, the Object Management Group (OMG) introduced the Unified Modeling Language (UML). It was created to help IT professionals design and communicate software systems more easily. Think of it like blueprints for a building: UML gives developers a standard way to plan and share their ideas about how a system should work.

UML has become a popular tool in the tech world. You’ll often see it listed on resumes, but many people don’t actually know how to use it well. That’s why learning the basics of UML is important if you want to include it in your skill set. In this guide, we’ll cover an article written by Donald Bell, a solutions architect at IBM, along with some of the most common diagrams and how they’re used.

What Makes UML Special?

UML is not tied to a specific programming language. This makes it flexible and easy to use in many different environments, whether you’re working with Java, .NET, or something else. Also, UML is a language, not a method. This means it can fit into any company’s way of working without requiring big changes.

The main purpose of UML is to help teams understand and share their ideas more clearly. By using UML diagrams, teams can communicate how a system will work, making it easier for new members to join a project and get up to speed quickly.

Key Types of UML Diagrams

Use-Case Diagrams: These show how users (called “actors”) interact with the system. For example, they can illustrate how a customer logs into an app or makes a purchase. Use-case diagrams are simple and focus on the system’s main functions.

Activity Diagrams: These diagrams show the flow of actions in a process. They’re great for mapping out workflows, like how a customer service ticket moves from “open” to “resolved.” Activity diagrams are easy to understand, even for people who don’t have a technical background.

Deployment Diagrams: These focus on where parts of the system will run, like servers or applications. They show how different pieces of the system communicate and help teams plan how everything will work in real life.

Why UML Still Matters

UML has been around for over 25 years, but it’s still widely used because its core ideas are timeless. Much like classic software books that are still relevant today, UML helps solve problems that developers face every day.

Even without fancy tools, you can start using UML with just a whiteboard or pen and paper. By practicing with basic diagrams, you’ll improve how you share your ideas and work with others on software projects. Keep learning, and UML can become one of your most useful tools!

Reference

https://developer.ibm.com/articles/an-introduction-to-uml/

From the blog CS@Worcester – The Bits & Bytes Universe by skarkonan and used with permission of the author. All other rights reserved by the author.

SOLID Principles Made Easy

As my year progressed, I wanted to better understand the basic principles of software design, so I sought out the SOLID principles after hearing about them throughout college. I had not yet put the time into properly learning and practicing them, and I came across a blog that puts the principles into text simply and with excellent examples. The article is titled “The SOLID Principles of Object-Oriented Programming Explained in Plain English” by Yiğit Kemal Erinç.

The article begins by identifying what the SOLID principles are and their importance. These principles are five key guidelines for object-oriented design that help developers create maintainable, flexible, and understandable code. The principles are presented in great detail, with several coding examples and potential drawbacks or dangerous pitfalls. They are:

  1. Single Responsibility Principle (SRP): A class should have one responsibility and one reason to change. By adhering to the SRP, you minimize complexity, avoid conflicting changes, and simplify version control and collaboration.
  2. Open-Closed Principle (OCP): Classes should be open for extension but closed for modification. This means new functionality can be added without altering existing code. This is often done by using interfaces or abstract classes. 
  3. Liskov Substitution Principle (LSP): Subtypes must be substitutable for their base types without altering the correctness of the program. 
  4. Interface Segregation Principle (ISP): Clients should not be forced to depend on interfaces they don’t use. This principle advocates for creating small, specific interfaces rather than large, general-purpose ones. It ensures that classes only implement methods relevant to their functionality, and there is less bloat and redundancy in the code.
  5. Dependency Inversion Principle (DIP): High-level modules should not depend on low-level modules; both should depend on abstractions. This promotes flexibility and low coupling, making systems easier to extend and modify (a brief sketch follows this list).
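To make a couple of these ideas concrete, here is a minimal Java sketch of my own (the class names are illustrative, not taken from the article). The calculator depends only on an abstraction, so new shapes extend behavior without modifying existing code, in the spirit of OCP and DIP.

```java
import java.util.List;

// Illustrative OCP/DIP sketch: AreaCalculator is closed for modification
// but open for extension -- new shapes can be added without touching it.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

class Rectangle implements Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    public double area() { return width * height; }
}

class AreaCalculator {
    // Depends only on the Shape abstraction (the spirit of DIP).
    double totalArea(List<Shape> shapes) {
        return shapes.stream().mapToDouble(Shape::area).sum();
    }
}

public class SolidDemo {
    public static void main(String[] args) {
        AreaCalculator calc = new AreaCalculator();
        System.out.println(calc.totalArea(List.of(new Circle(1.0), new Rectangle(2.0, 3.0))));
    }
}
```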

The key takeaway presented in the conclusion is that by following SOLID principles, developers can write cleaner, more testable, and scalable code.

The SOLID principles are essential and key learning blocks for building efficient code. This is a topic that has been addressed in class as design smells and patterns. By learning this through the lens of SOLID I was able to reconcile the importance of that topic in building good code. Through the CS-343 course, I was looking to understand those design patterns better, and by understanding them as SOLID principles I am able to better grasp them. I will be prepared to answer questions regarding SOLID principles in context thanks to this article in combination with the class material.

Source: https://www.freecodecamp.org/news/solid-principles-explained-in-plain-english/

From the blog CS@Worcester – WSU CS Blog: Ben Gelineau by Ben Gelineau and used with permission of the author. All other rights reserved by the author.

The Hidden Backbone of Software Development: Software Documentation

Intro

In software development, documentation often takes a backseat to coding, testing, and deploying. However, software documentation is the backbone of more easily maintainable, scalable, and collaborative projects. This blog post by David Oragui gives useful information on why documentation is essential and how it supports both developers and end-users throughout the software lifecycle.

Summary of Source

The blog post explores the role of software documentation and offers practical advice on creating and maintaining it effectively. The main sections are:

  1. What is Software Documentation?: A definition and explanation of how it serves as a guide for developers, users, and stakeholders, providing clarity on a system’s functionality and usage.
  2. Types of Documentation: A breakdown of key categories, including user documentation, developer guides, technical documentation, and process documentation.
  3. Best Practices for Writing Documentation: Practical tips, such as structuring content logically, using plain language, and keeping documentation up-to-date.
  4. Using Software Documentation Tools: The different types of documentation tools and the reasons to consider them, including automation, collaboration, and accessibility (a small example follows this list).
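One familiar example of such a tool is Javadoc, which generates browsable reference pages directly from comments in the source code. Here is a small, hypothetical Java sketch of what that developer-facing documentation can look like (the class and method are my own invention, not from the blog post):

```java
/**
 * Converts temperatures between Celsius and Fahrenheit.
 *
 * <p>Hypothetical example: Javadoc comments like this one can be turned into
 * HTML reference pages automatically by the javadoc tool, keeping developer
 * documentation close to the code it describes.</p>
 */
public class TemperatureConverter {

    /**
     * Converts a temperature from Celsius to Fahrenheit.
     *
     * @param celsius the temperature in degrees Celsius
     * @return the equivalent temperature in degrees Fahrenheit
     */
    public static double toFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```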

Why I Chose This Blog

I selected this blog because it is a concise resource that explores all there is to know about documentation, making it a great guide to refer back to when needed. In my coursework, the focus has largely been on coding, but I’ve noticed that a lack of proper documentation can make even the best-written software hard to use and maintain. This blog stood out for its clear and actionable advice, which is especially valuable as I create projects and prepare for internships.

Reflection

The blog’s structured approach to explaining software documentation makes it great as an introductory resource. One section that particularly stood out was the breakdown of the different documentation types. It clarified the different audiences for documentation: end-users, developers, and stakeholders, and how each requires tailored content. For example, user documentation should be simple and accessible, while developer guides need to be more technical and detail-oriented.

This difference in target audiences was an eye-opener, even though it seems obvious once it’s said. There’s no reason to put technical details in documentation an end-user will see, because they won’t understand them or even need them. Whenever I thought about software documentation before, it was just a one-size-fits-all document that explained the technical details of the software.

Another valuable takeaway was the emphasis on keeping documentation up to date. It made me consider the consequences of never updating documentation and having incorrect information that could end up causing a lot of trouble. When the point of documentation is to make it easier for all parties to understand a project, outdated docs would contribute to the very problem they’re meant to solve.

Future Application

Moving forward, I plan to apply these best practices to my projects. I will create separate documentation resources for different target audiences and update the docs every time a change is made, so there is no confusion from outdated information. Software documentation simply makes the whole process of understanding and maintaining code a lot easier, so there is really no downside to adding quality documentation to a project.

Citation

Software Documentation Best Practices [With Examples] by David Oragui

https://helpjuice.com/blog/software-documentation

From the blog CS@Worcester – The Science of Computation by Adam Jacher and used with permission of the author. All other rights reserved by the author.

Semantic Versioning or Updating Numbers?

Versioning plays an important role in development, as it offers a clear framework for tracking changes and updates to code. One system that has become widely adopted is semantic versioning, which provides a standardized approach to naming software releases. For this blog entry, I chose to review the AWS blog post on semantic versioning. This resource offers a detailed exploration of how semantic versioning (SemVer) simplifies release management and ensures consistent communication among developers, operations teams, and users.

The Article

The article introduces semantic versioning as a three-part versioning system, MAJOR.MINOR.PATCH (e.g., 2.3.1), where each segment signals a specific type of change (a short sketch follows the list):

  • MAJOR: Incremented when incompatible API changes occur.
  • MINOR: Updated when new features are added in a backward-compatible way.
  • PATCH: Increased when bug fixes or minor changes that don’t affect the API are implemented.
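As a rough illustration of these rules (my own Java sketch, not code from the AWS post), comparing two version numbers tells you what kind of change separates them:

```java
// Hypothetical sketch: parse two semantic versions and describe the change.
// Requires Java 16+ for records.
public class SemVerDemo {

    record Version(int major, int minor, int patch) {
        static Version parse(String s) {
            String[] parts = s.split("\\.");
            return new Version(Integer.parseInt(parts[0]),
                               Integer.parseInt(parts[1]),
                               Integer.parseInt(parts[2]));
        }
    }

    static String describeChange(Version from, Version to) {
        if (to.major() != from.major()) return "MAJOR: possibly incompatible API changes";
        if (to.minor() != from.minor()) return "MINOR: new, backward-compatible features";
        if (to.patch() != from.patch()) return "PATCH: backward-compatible bug fixes";
        return "no change";
    }

    public static void main(String[] args) {
        // Going from 2.3.1 to 3.0.0 signals a breaking change.
        System.out.println(describeChange(Version.parse("2.3.1"), Version.parse("3.0.0")));
    }
}
```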

The blog highlights how this system helps manage software lifecycles by making the scope of changes transparent, which supports better collaboration between teams. It also emphasizes the importance of tagging releases in version control systems like Git to align codebase changes with their respective version numbers.

The article also showcases real-world applications of semantic versioning in continuous integration/continuous delivery (CI/CD) pipelines. For example, tools like AWS CodePipeline and CodeDeploy can use version tags to automate deployment processes, ensuring consistency and reducing the likelihood of errors.

Why I Chose This Resource

In class, we learned about the importance of versioning in software projects and how clear communication of changes can prevent confusion, particularly in team environments. Semantic versioning was introduced as a standard approach, but I wanted to explore its real-world applications further. This AWS blog post stood out because it not only explained SemVer but also demonstrated its practical use in release management and CI/CD pipelines, areas that weren’t covered in the current coursework.

Personal Reflections

The resource clarified how semantic versioning improves team collaboration by setting clear expectations about software updates. For instance, knowing that a MINOR update is backward-compatible or that a MAJOR update might require significant adjustments removes ambiguity for developers and users.

One aspect that particularly resonated with me was the integration of semantic versioning into CI/CD workflows. The article’s example of automating deployments based on version tags helped me understand how these practices can streamline release management, reducing manual errors and accelerating delivery timelines. I had not considered how tools like AWS CodePipeline could interact with semantic versioning to achieve this level of efficiency.

Future Practice(s)

I plan to adopt semantic versioning in all future projects, starting with any academic group work. For example, using version tags in Git will help my team better manage changes and understand the implications of updates. Additionally, I want to experiment with automating deployments using CI/CD tools like GitHub Actions or AWS CodePipeline, as the article suggests.

Whether contributing to open-source projects or collaborating in a work environment, semantic versioning will help me communicate changes clearly and maintain quality and control.

Conclusion

The AWS blog post brought up the importance of semantic versioning as a tool for simplifying release management and fostering collaboration. Reading it not only deepened my understanding but also inspired me to integrate these practices into my current and future workflows. Semantic versioning is more than just a numbering system; I think it’s a critical framework for ensuring clarity, consistency, and efficiency in software development. Thank you for reading my blog!

https://aws.amazon.com/blogs/devops/using-semantic-versioning-to-simplify-release-management

https://www.geeksforgeeks.org/introduction-semantic-versioning/

From the blog CS@Worcester – function & form by Nathan Bui and used with permission of the author. All other rights reserved by the author.

Software Design Principles

For this week, I decided to find a blog about software design principles to refresh my mind on the topic. I found a blog called “Main Software Design Principles Every Developer Should Know” by Amr Saafan. I decided to discuss this blog specifically because it breaks each section down into short, simple bullet points.

The first section is Object-Oriented Programming (OOP) principles. This section explains principles like encapsulation, which guarantees that objects securely manage their data and behavior while concealing their complexity and granting limited access. Next is abstraction: by concealing details and concentrating on key elements, abstraction makes problem-solving easier and allows for solutions that are extendable and reusable. Inheritance encourages code reuse by enabling classes to share properties and functions, whereas polymorphism enables flexible behavior depending on object types, increasing flexibility and minimizing code duplication. I remembered these principles the most, possibly because I’ve had plenty of discussions about them; even so, I found the explanation of encapsulation very helpful.

Software design is further improved by the SOLID principles. In order to improve testability and maintenance, the Single Responsibility Principle (SRP) promotes modular classes with distinct responsibilities. In order to maintain stability, the Open/Closed Principle (OCP) advises adding functionality through new code as opposed to changing current code. The Interface Segregation Principle (ISP) promotes short, targeted interfaces, minimizing needless dependencies, whereas the Liskov Substitution Principle (LSP) preserves compatibility between base and derived classes. The Dependency Inversion Principle (DIP), which prioritizes abstractions over concrete implementations, encourages loose coupling. I needed a refresher on LSP and ISP the most out of this section, and I found the explanations to be helpful.

The next section, Don’t Repeat Yourself (DRY), explains how eliminating redundancy promotes simpler, maintainable, and error-resistant code through modularity and reuse. Keep It Simple, Stupid (KISS) promotes straightforward, effective solutions that are simpler to comprehend and maintain, while discouraging overengineering. The You Aren’t Gonna Need It (YAGNI) principle suggests avoiding unneeded features in advance, simplifying things, and concentrating on urgent needs. Those principles felt self-explanatory to me, but they were explained well. The Law of Demeter (LoD) improves modularity and decreases coupling by restricting an object’s interactions to its immediate dependents. I definitely needed the explanation of this principle. I found it helpful to have the list of how to apply it, which included items like Limit Direct Dependencies, Use Interfaces, and Avoid Chain Calls. It aided in my understanding of the principle (a small sketch follows below).
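To make the Law of Demeter concrete, here is a small Java sketch of my own (the class names are made up, not from the blog). The first cashier method reaches through a chain of objects, while the second only talks to its immediate collaborator:

```java
// Hypothetical Law of Demeter sketch (names are illustrative only).
class Wallet {
    private double balance;
    Wallet(double balance) { this.balance = balance; }
    boolean deduct(double amount) {
        if (amount > balance) return false;
        balance -= amount;
        return true;
    }
    double getBalance() { return balance; }
}

class Customer {
    private final Wallet wallet = new Wallet(50.0);
    Wallet getWallet() { return wallet; }   // invites chain calls
    boolean pay(double amount) {            // LoD-friendly alternative
        return wallet.deduct(amount);
    }
}

class Cashier {
    // Violates LoD: reaches through Customer into Wallet.
    boolean chargeBad(Customer c, double amount) {
        return c.getWallet().getBalance() >= amount && c.getWallet().deduct(amount);
    }

    // Follows LoD: asks the immediate collaborator to do the work.
    boolean chargeGood(Customer c, double amount) {
        return c.pay(amount);
    }
}

public class LodDemo {
    public static void main(String[] args) {
        System.out.println(new Cashier().chargeGood(new Customer(), 20.0)); // true
    }
}
```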

Next, the Composition Over Inheritance principle suggests building systems with reusable components rather than deep inheritance hierarchies, improving flexibility and modularity. Finally, the Principle of Least Astonishment (POLA) ensures software behaves as expected, minimizing confusion by being clear, consistent, and intuitive.
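A quick Java sketch of composition over inheritance (again, my own illustrative names): instead of subclassing to gain behavior, the class holds a reusable component and delegates to it, so the behavior can be swapped without reshaping a class hierarchy.

```java
// Hypothetical composition-over-inheritance sketch.
interface Engine {
    String start();
}

class ElectricEngine implements Engine {
    public String start() { return "silent start"; }
}

class GasEngine implements Engine {
    public String start() { return "vroom"; }
}

// Car is composed of an Engine rather than extending an engine class.
class Car {
    private final Engine engine;
    Car(Engine engine) { this.engine = engine; }
    String start() { return "Car: " + engine.start(); }
}

public class CompositionDemo {
    public static void main(String[] args) {
        System.out.println(new Car(new ElectricEngine()).start()); // Car: silent start
        System.out.println(new Car(new GasEngine()).start());      // Car: vroom
    }
}
```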

As a software developer, these design principles will help me design scalable, flexible, and user-friendly systems. KISS and YAGNI simplify designs to avoid unnecessary complexity, while composition and encapsulation add flexibility. Following POLA makes the system easier to understand and use for both developers and users.

From the blog CS@Worcester – Live Laugh Code by Shamarah Ramirez and used with permission of the author. All other rights reserved by the author.

What’s an API? A Casual Guide for Noobs

Have you seen those memes about junior developers pushing the API key as a comment but never understood why it is such a big deal? Well, have no fear because if you have no idea what an API is, you’re at the right place.

So, What Even Is an API?

API stands for Application Programming Interface. It sounds super technical, but it’s not that bad. Basically, an API is like a menu at a restaurant. The menu tells you what dishes (functions) the kitchen (the system) can make for you. You don’t need to know how they’re cooking your pasta in the back; you just order, and it shows up at your table.

In the tech world, an API does the same thing. It lets one app talk to another without knowing all the inner details of how the other app works. Cool, right?

Why Do We Need APIs?

Imagine you’re building an app that needs weather data. You could go out, set up weather stations, and measure the weather yourself (good luck with that). OR you could just use a weather API that already collects and shares this data for you. APIs save you a ton of time by letting you use existing tools and data instead of building everything from scratch.

How Does It Work?

Here’s a quick breakdown:

  1. You Make a Request: Your app sends a request to the API. Think of it like sending a text message that says, “Hey, can I get today’s weather for [city name]?”
  2. The API Responds: The API sends back the info you need, usually in a format like JSON (basically a fancy way of organizing data).

That’s it. It’s like texting a really reliable friend who always gives you the answers you need.
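Here is a rough sketch of that request/response cycle in Java (the URL, query parameter, and JSON fields are hypothetical, just to show the shape of the interaction):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WeatherDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and query parameter -- a real weather API
        // will have its own URL, parameters, and API key requirements.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/weather?city=Worcester"))
                .GET()
                .build();

        // Send the request and print the JSON the server returns.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body()); // e.g. {"city":"Worcester","tempF":41}
    }
}
```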

Real-Life Examples of APIs

  • Google Maps API: Used by apps to show maps and directions.
  • Twitter API: Lets developers pull tweets or post updates automatically.
  • Spotify API: Allows you to add music to your app or create custom playlists.

Even when you’re signing into a website with Google or Facebook, there’s an API making that happen behind the scenes.

Why Should You Care?

If you’re a CS student like me (or thinking about becoming one), learning to use APIs is a game-changer. It’s how you get your apps to do cool things without reinventing the wheel. Plus, if you ever want to work in software development, knowing how to interact with APIs is a must.

So, next time you hear someone drop “API” in a convo, you can confidently nod and say, “Oh yeah, I’ve used APIs before.” (Fake it ‘til you make it, kings.)

From the blog CS@Worcester – Anairdo's WSU Computer Science Blog by anairdoduri and used with permission of the author. All other rights reserved by the author.

Trying to use Rest API

In this blog post, I’ll share my thoughts on an article I read titled “What is a REST API?” from Cleo’s blog. This article dives into the concept of REST (Representational State Transfer) APIs, and after reading it, I feel like I now have a much clearer understanding of how REST APIs work and why they’re so important in modern web development. This topic ties directly into our web development course, where we’re learning about web services and how to connect different systems.

The article explains what REST APIs are and why they are widely used. It starts by explaining the core principles of REST, such as statelessness and resource-based URIs (Uniform Resource Identifiers). In simple terms, REST APIs allow different software systems to communicate over the internet by sending requests (like GET, POST, PUT, DELETE) to a server, where each request is independent and contains all the necessary information to be processed. The article also discusses the scalability and flexibility of REST APIs, which make them a popular choice for building web applications that need to handle a large number of users or integrate with other services.

I chose this article because I’ve heard the term “REST API” thrown around in class and in tech articles, but I never fully understood how they work. As a computer science beginner, I often find myself struggling to grasp concepts like APIs and how they fit into the bigger picture of web development. Since we’re covering APIs and web services in our course, I figured reading a simple, clear article would help me solidify my understanding of this important topic.

After reading the article, I feel much more confident about my understanding of REST APIs. Before, I knew APIs were used to transfer data between different applications, but I didn’t fully understand how REST APIs specifically work. The article’s explanation of statelessness was particularly eye-opening to me. I had no idea that each request in a REST API is self-contained, meaning it doesn’t rely on any prior interactions to be processed. This makes sense when you think about how web applications need to be scalable and efficient—keeping things stateless helps ensure the server isn’t overloaded with unnecessary data.

Another thing I found interesting was the explanation of how RESTful APIs use HTTP methods (like GET and POST) to interact with resources. It made me realize how intuitive and flexible REST is for creating services that can easily be integrated with other software systems. I now feel much more comfortable working with APIs.
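To picture one of those methods in code, here is a small, hypothetical Java sketch (the endpoint and JSON body are made up). A POST request carries everything the server needs in one self-contained message, which is the stateless idea in practice:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestPostDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical resource URI and JSON payload.
        String json = "{\"title\":\"Hello REST\",\"body\":\"First post\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/posts"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        // The server processes this request on its own, with no session state.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // e.g. 201 Created
    }
}
```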

I want to explore more advanced topics, like authentication and error handling, which the article briefly touched on. This will help me build more secure and reliable web applications.

Resource:

https://www.cleo.com/blog/blog-knowledge-base-what-is-rest-api

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Introduction to Pattern Designing

Source: https://www.geeksforgeeks.org/introduction-to-pattern-designing/

This article is titled “Introduction to Pattern Designing.” In regards to software development, “pattern designing refers to the application of design patterns, which are reusable and proven solutions to common problems encountered during the design and implementation of software systems.” These reusable design patterns showcase relationships that occur between classes or objects. They are language-independent, so they can be described as ideas that make code flexible and overall speed up the process of development. Their purpose is to solve common problems. There are three main kinds of design patterns: creational, structural, and behavioral.

“Creational design patterns abstract the instantiation process.” They offer a sense of flexibility in regards to “what gets created, who creates it, how it gets created, and, when.” Knowledge about which concrete class is being used is encapsulated, and the way instances of classes are created is hidden.

“Structural design patterns are concerned with how classes and objects are composed to form larger structures.” Inheritance is used to create interfaces/implementations. Structural design patterns are good for when you want to make independent class libraries collaborate effectively with one another, and they offer flexibility regarding object composition.

“Behavioral design patterns are concerned with algorithms and the assignment of responsibilities between objects.” Patterns of communication are being described here. Inheritance is used to divide behaviors between classes, object composition is used for behavioral object patterns, and the object patterns encapsulate behaviors in objects.

Overall, the benefits of pattern designing are reusable solutions, scalability, and abstraction/communication. The downsides are that there is a learning curve while you try to understand the patterns, there may be questions about when you should apply them in your code, and if patterns aren’t implemented consistently and kept in step with the advancement of the system, maintenance issues may occur. Regardless, they are a great way to solve common problems during the development process.
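As one concrete illustration (a minimal Java sketch of my own, not from the article), the Strategy pattern, a behavioral pattern, encapsulates interchangeable algorithms behind a common interface so the calling code never has to change:

```java
// Minimal Strategy pattern sketch (illustrative names, not from the article).
interface DiscountStrategy {
    double apply(double price);
}

class NoDiscount implements DiscountStrategy {
    public double apply(double price) { return price; }
}

class HolidayDiscount implements DiscountStrategy {
    public double apply(double price) { return price * 0.8; }
}

class Checkout {
    private final DiscountStrategy strategy;
    Checkout(DiscountStrategy strategy) { this.strategy = strategy; }
    double total(double price) { return strategy.apply(price); }
}

public class StrategyDemo {
    public static void main(String[] args) {
        // The discount algorithm is chosen at runtime without changing Checkout.
        System.out.println(new Checkout(new NoDiscount()).total(100.0));      // 100.0
        System.out.println(new Checkout(new HolidayDiscount()).total(100.0)); // 80.0
    }
}
```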

I chose this topic because the idea of design patterns was in the syllabus and it interested me. We learned about design patterns such as Factory, Strategy, and Singleton, but reading about the larger categories of creational, structural, and behavioral patterns offered deeper insight into the topic. The supposed benefits of common methodologies in software development are always presented, but it is also good to know about the downsides, and I am glad this article covered them for design patterns. When I am working on a team or in the workforce, I will definitely reference these design patterns to improve the maintainability and scalability of my code, and do so in a way that avoids the pitfalls of implementing them incorrectly.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Microservices and Their Importance

The blog “Microservices Architecture for Enterprises,” authored by Charith Tangirala, provides a detailed summary of a talk by Darren Smith, a General Manager at ThoughtWorks Sydney, on implementing microservices architecture in large enterprises. The blog explores key considerations for transitioning from monolithic systems to microservices, including cultural, technical, and operational changes. It highlights critical topics such as dependency management, governance, infrastructure automation, deployment coupling, and the evolving role of architects in fostering collaboration between technical and business teams. By sharing practical insights, the blog offers a framework to assess whether microservices are suitable for specific organizational goals and challenges.

I picked this blog because we are currently learning about microservices and APIs in class. I wanted to explore how the concepts we are studying are applied in real-world scenarios and understand their practical importance. This blog stood out because it connects the theoretical foundation of microservices with the challenges and solutions encountered in large enterprises. By studying this resource, I aimed to gain insights into why microservices are a valuable architectural choice and what factors should be considered when implementing them.

One key takeaway from the blog was the explanation of “deployment coupling.” It was interesting to learn how monolithic systems often require synchronized deployments, while microservices, through the use of REST APIs, allow for independent service releases. This flexibility is one of the main benefits of microservices. At the same time, the blog points out operational challenges, such as the complexity of monitoring and managing numerous services, which requires a strong DevOps infrastructure. It reminded me that while microservices can provide agility, they also demand careful planning and strong operational practices.

Another important point was the emphasis on organizational culture. The blog highlights how the success of microservices depends on cross-team cooperation, education, and alignment. This reinforces the idea that architecture isn’t just about technology—it’s about how people and teams work together. It made me realize that adopting microservices is as much about communication and collaboration as it is about code. I’ve also learned that companies like Netflix and Amazon have already implemented microservices architecture, leading to significant success with their products. This real-world application of microservices by industry leaders shows how impactful this approach can be when implemented effectively, further inspiring me to learn more and apply these principles in future projects.

As I move forward, I want to keep learning more about microservices and APIs and use what I’ve learned in future projects. My goal is to apply these concepts to real-world problems and build systems that are flexible and efficient. I also hope to use this knowledge in my future career as a Software Developer, where I can create scalable and innovative solutions.

Sources:

Microservices Architecture for Enterprises

Citation:

Tangirala, Charith. “Microservices Architecture for Enterprises.” ThoughtWorks, 13 July 2015, http://www.thoughtworks.com/insights/blog/microservices-architecture-for-enterprises. Accessed 23 Nov. 2024.

From the blog CS@Worcester – CodedBear by donna abayon and used with permission of the author. All other rights reserved by the author.

Exploring OpenAPI with YAML

A blog from Amir Lavasani caught my attention this week as it perfectly aligns with topics we are currently focusing on in our course, like working with the OpenAPI Specification (OAS) using YAML. Working with REST APIs is relatively new for me, and fully comprehending how these requests and interactions work is still in progress. Lavasani structures this post as a tutorial for building a blog-posts API, but there is a wealth of background information and knowledge provided for each step. Please consider reading this blog post and giving the tutorial a try!

This tutorial starts, as we all should, with planning out anticipated endpoints and methods, understanding the schema of the JSON objects that will be used, and acknowledging project requirements. This step, though simple, is a helpful reminder that planning is vital to ensure a clean and concise structure once the work is implemented. Moving into the OAS architecture, Lavasani breaks it down into the Metadata section, the Info object, the Servers object, and the Paths object (and all of the objects that fall within it). Authentication is touched on briefly, but the writer provides links to outside sources to learn more. For me, this post reiterates all the OpenAPI work we have completed in class with another example of project application and provides additional resources to learn from and work with.

Most of the basics that this post touches on we have already reviewed in our course, but the writer provides valuable information about the finer details. A hint that was useful to me was the ability to use Markdown syntax in the description fields in OAS – this can improve the ease of use and understanding of the API. I also learned how the full URL is constructed based on the paths object. We know our paths are based on the endpoints we define and the operations (HTTP methods) they support, but to make sense of it all, these pieces of information are concatenated with the URL from the servers object to create the full URL. This is somewhat obvious, but seeing it spelled out as Lavasani does is very useful to fully reinforce what we know about these interactions. Another new piece of knowledge relates to the parameter object. I was not initially aware of all of the ways we can set operation parameters. We know how to use parameters via the URL path or the query string, but we can also use headers or cookies for this purpose, which is useful to know for future implementations. Lastly, the writer mentions the OpenAPI Generator, which can generate client-side and server-side code from an OAS; though I am not familiar with this tool, I can see its practicality, and I will likely complete this tutorial and follow up by learning about the outside tools mentioned.

This blog provides a practical example of working with the OpenAPI Specification, reinforcing concepts we’ve studied in class while providing useful insights. 

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.