Category Archives: CS@Worcester

Trying to use REST APIs

In this blog post, I’ll share my thoughts on an article I read titled “What is a REST API?” from Cleo’s blog. The article dives into the concept of REST (Representational State Transfer) APIs, and after reading it, I feel like I have a much clearer understanding of how REST APIs work and why they’re so important in modern web development. This topic ties directly into our web development course, where we’re learning about web services and how to connect different systems.

The article explains what REST APIs are and why they are widely used. It starts by explaining the core principles of REST, such as statelessness and resource-based URIs (Uniform Resource Identifiers). In simple terms, REST APIs allow different software systems to communicate over the internet by sending requests (like GET, POST, PUT, DELETE) to a server, where each request is independent and contains all the necessary information to be processed. The article also discusses the scalability and flexibility of REST APIs, which make them a popular choice for building web applications that need to handle a large number of users or integrate with other services.

I chose this article because I’ve heard the term “REST API” thrown around in class and in tech articles, but I never fully understood how they work. As a computer science beginner, I often find myself struggling to grasp concepts like APIs and how they fit into the bigger picture of web development. Since we’re covering APIs and web services in our course, I figured reading a simple, clear article would help me solidify my understanding of this important topic.

After reading the article, I feel much more confident about my understanding of REST APIs. Before, I knew APIs were used to transfer data between different applications, but I didn’t fully understand how REST APIs specifically work. The article’s explanation of statelessness was particularly eye-opening to me. I had no idea that each request in a REST API is self-contained, meaning it doesn’t rely on any prior interactions to be processed. This makes sense when you think about how web applications need to be scalable and efficient—keeping things stateless helps ensure the server isn’t overloaded with unnecessary data.

Another thing I found interesting was the explanation of how RESTful APIs use HTTP methods (like GET and POST) to interact with resources. It made me realize how intuitive and flexible REST is for creating services that can easily be integrated with other software systems. I now feel much more comfortable working with APIs.
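To make those methods concrete, here is a minimal sketch of calling a hypothetical REST API from Python with the requests library; the base URL, token, and fields are made up for illustration and are not taken from the article.

```python
# Minimal REST client sketch using the requests library.
# The base URL, token, and resource fields are hypothetical.
import requests

BASE = "https://api.example.com"

# Because REST is stateless, everything the server needs (including
# authentication) travels with each request.
headers = {"Authorization": "Bearer <token>"}

# GET: fetch an existing resource.
resp = requests.get(f"{BASE}/posts/1", headers=headers)
print(resp.status_code, resp.json())

# POST: create a new resource by sending a JSON body.
new_post = {"title": "Hello REST", "body": "My first post"}
resp = requests.post(f"{BASE}/posts", json=new_post, headers=headers)
print(resp.status_code)

# PUT and DELETE follow the same pattern.
requests.put(f"{BASE}/posts/1", json={"title": "Updated"}, headers=headers)
requests.delete(f"{BASE}/posts/1", headers=headers)
```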

I want to explore more advanced topics, like authentication and error handling, which the article briefly touched on. This will help me build more secure and reliable web applications.

Resource:

https://www.cleo.com/blog/blog-knowledge-base-what-is-rest-api

From the blog Computer Science From a Basketball Fan by Brandon Njuguna and used with permission of the author. All other rights reserved by the author.

Introduction to Pattern Designing

Source: https://www.geeksforgeeks.org/introduction-to-pattern-designing/

This article is titled “Introduction to Pattern Designing.” In software development, “pattern designing refers to the application of design patterns, which are reusable and proven solutions to common problems encountered during the design and implementation of software systems.” These reusable design patterns describe relationships that occur between classes or objects. They are language independent, so they can be described as ideas that make code flexible and speed up the overall process of development. Their purpose is to solve common problems.

There are three main kinds of design patterns: creational, structural, and behavioral. “Creational design patterns abstract the instantiation process.” They offer flexibility in “what gets created, who creates it, how it gets created, and, when.” Knowledge about which concrete class is being used is encapsulated, and the way instances of classes are created is hidden. “Structural design patterns are concerned with how classes and objects are composed to form larger structures.” Here, inheritance is used to compose interfaces and implementations, and these patterns are useful when you want independent class libraries to collaborate effectively with one another or want flexibility in object composition. “Behavioral design patterns are concerned with algorithms and the assignment of responsibilities between objects.” What is described here are patterns of communication: inheritance divides behavior between classes, behavioral object patterns rely on object composition, and the object patterns encapsulate behavior in objects.

Overall, the benefits of pattern designing are reusable solutions, scalability, and abstraction/communication. The downsides are the learning curve involved in understanding the patterns, uncertainty about when to apply them in your code, and the maintenance issues that can occur if patterns aren’t implemented consistently and kept in step with the evolution of the system. Regardless, they are a great way to solve common problems during the development process.
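To tie the creational category to something concrete, here is a minimal Factory Method sketch in Python; this is my own illustration, not code from the article. The factory encapsulates which concrete class gets instantiated, so callers depend only on the abstract interface.

```python
# Minimal Factory Method sketch (creational pattern).
# Callers never learn which concrete class is created.
import math
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side ** 2

def shape_factory(kind: str, size: float) -> Shape:
    # Creation logic lives in one place and can change
    # without touching any client code.
    creators = {"circle": Circle, "square": Square}
    return creators[kind](size)

print(shape_factory("circle", 2.0).area())  # 12.566...
```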

I chose this topic because the idea of design patterns was in the syllabus and it interested me. We learned about design patterns such as Factory, Strategy, and Singleton, but reading about the larger categories of creational, structural, and behavioral patterns offered deeper insight into the topic. The benefits of common methodologies in software development are always presented, but it is also good to know about the downsides, and I am glad this article covered those for design patterns. When I am working on a team or in the workforce, I will definitely reference these design patterns to improve the maintainability and scalability of my code, and do so in a way that avoids the pitfalls of implementing them incorrectly.

From the blog CS@Worcester – Shawn In Tech by Shawn Budzinski and used with permission of the author. All other rights reserved by the author.

Agile and its Shortcomings

https://www.codingame.com/blog/agile-failed-peek-future-programming

This blog post by CodinGame provides a short history of development methodologies and goes on to critique Agile specifically. It describes how, despite how widespread the methodology has become, Agile has generally not succeeded because of how it has been implemented by corporate management teams. While Agile as a methodology strives to be a set of principles that guide a team toward good practice and a healthy work environment, non-programmers use it as a tool to enforce hierarchical structures and rigid development. Most of what is said can likely also be applied to Scrum, though Scrum is not explicitly mentioned.

This blog interested me because when I learned about Agile and Scrum, I always thought to myself, “Why would you ever not choose these methodologies? These seem far superior to outdated methods like Waterfall.” However, this post opened my eyes to how Agile really only works when implemented as intended by those who wrote the manifesto. It makes very clear that what makes a methodology successful, or a team successful in general, is understanding its intent and being able to reflect on whether that intent aligns with the work style of the team in question. Generally, I feel that if you’re a business leader who wants a rigid plan, you should just follow a rigid methodology like Waterfall rather than creating a fake team experience with a smoke-and-mirrors version of Agile.

The post helped me reflect on what to look for in a well-functioning team. These insights can be very valuable when looking for a place of work: applied as a tool for analyzing employers, they can make it apparent at some point in the process whether a team is run by a group of developers or by non-programmers enforcing a strict hierarchical system of development. This kind of resource would also be useful in any position where one’s input is valued in deciding how a team should handle itself, since it helps in recognizing good and bad tendencies in a team, especially in a leadership role where hearing the whole team’s voice is valuable. Being able to express why a decision may be bad is valuable not only for working in a team but also for working under management, as articulated thoughts may be enough to have an impact on their perspective as well.

This blog highlights the importance of understanding and respecting the intent behind methodologies like Agile; it serves as a reminder that we need to hold ourselves and our team leaders accountable for how a team chooses to go about development.

From the blog CS@Worcester – CS ZStomski by Zachary Stomski and used with permission of the author. All other rights reserved by the author.

Microservices and Their Importance

The blog “Microservices Architecture for Enterprises,” authored by Charith Tangirala, provides a detailed summary of a talk by Darren Smith, a General Manager at ThoughtWorks Sydney, on implementing microservices architecture in large enterprises. The blog explores key considerations for transitioning from monolithic systems to microservices, including cultural, technical, and operational changes. It highlights critical topics such as dependency management, governance, infrastructure automation, deployment coupling, and the evolving role of architects in fostering collaboration between technical and business teams. By sharing practical insights, the blog offers a framework to assess whether microservices are suitable for specific organizational goals and challenges.

I picked this blog because we are currently learning about microservices and APIs in class. I wanted to explore how the concepts we are studying are applied in real-world scenarios and understand their practical importance. This blog stood out because it connects the theoretical foundation of microservices with the challenges and solutions encountered in large enterprises. By studying this resource, I aimed to gain insights into why microservices are a valuable architectural choice and what factors should be considered when implementing them.

One key takeaway from the blog was the explanation of “deployment coupling.” It was interesting to learn how monolithic systems often require synchronized deployments, while microservices, through the use of REST APIs, allow for independent service releases. This flexibility is one of the main benefits of microservices. At the same time, the blog points out operational challenges, such as the complexity of monitoring and managing numerous services, which requires a strong DevOps infrastructure. It reminded me that while microservices can provide agility, they also demand careful planning and strong operational practices.
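To picture what independent releases look like, here is a minimal sketch of a hypothetical “orders” service that depends on an “inventory” service only through its REST contract; the service names, URL, and routes are assumptions for illustration, not taken from the blog.

```python
# Hypothetical "orders" microservice. Its only coupling to the
# "inventory" service is the HTTP contract: as long as /stock/<id>
# keeps its shape, the inventory team can redeploy at any time
# without a synchronized release.
import requests
from flask import Flask, jsonify

app = Flask(__name__)
INVENTORY_URL = "http://inventory.internal:5001"  # hypothetical address

@app.route("/orders/<item_id>", methods=["POST"])
def place_order(item_id):
    # Ask the inventory service for current stock over REST.
    stock = requests.get(f"{INVENTORY_URL}/stock/{item_id}").json()
    if stock.get("available", 0) < 1:
        return jsonify({"status": "out_of_stock"}), 409
    return jsonify({"status": "placed", "item": item_id}), 201

if __name__ == "__main__":
    app.run(port=5000)
```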

Another important point was the emphasis on organizational culture. The blog highlights how the success of microservices depends on cross-team cooperation, education, and alignment. This reinforces the idea that architecture isn’t just about technology—it’s about how people and teams work together. It made me realize that adopting microservices is as much about communication and collaboration as it is about code. I also learned that companies like Netflix and Amazon have already implemented microservices architecture, leading to significant success with their products. This real-world application of microservices by industry leaders shows how impactful the approach can be when implemented effectively, further inspiring me to learn more and apply these principles in future projects.

As I move forward, I want to keep learning more about microservices and APIs and use what I’ve learned in future projects. My goal is to apply these concepts to real-world problems and build systems that are flexible and efficient. I also hope to use this knowledge in my future career as a Software Developer, where I can create scalable and innovative solutions.

Sources:

Microservices Architecture for Enterprises

Citation:

Tangirala, Charith. “Microservices Architecture for Enterprises.” ThoughtWorks, 13 July 2015, http://www.thoughtworks.com/insights/blog/microservices-architecture-for-enterprises. Accessed 23 Nov. 2024.

From the blog CS@Worcester – CodedBear by donna abayon and used with permission of the author. All other rights reserved by the author.

Optimizing Docker in the Cloud

After our recent studies of how Docker manages dependencies and ensures consistent development environments, I was interested in learning more about how to use it, because something like this could have saved me many hours of troubleshooting on a recent research project. This article, written by Medium user Dispanshu, highlights the capabilities of Docker and how to use the service efficiently in a cloud environment.

The article focuses on optimizing Docker images to achieve high-performance, low-cost deployment. The problem some developers run into is having very large images, which slow the build process, waste storage, and reduce application speed. I learned from this work that large images result from including unnecessary tools, dependencies, and files, inefficient layer caching, and including other full images (like Python in this case). Dispanshu achieves the solution in five parts:

  1. Multi-stage builds
  2. Layer optimizations
  3. Minimal base images (including scratch)
  4. Advanced techniques like distroless images
  5. Security best practices

Using these techniques, the image receives a size reduction from 1.2GB to 8MB! The most impactful change is multi-stage builds, to which the writer attributes over 90% of the reduction. I have never used these techniques before, but my interest was definitely piqued when I saw the massive size reduction that resulted from these changes.

The multi-stage builds technique separates the build stage from the production stage. Build-time dependencies are kept out of the actual runtime environment, which avoids including unnecessary files or tools in the resulting image. Another technique recommends minimal base images: using the slim or alpine version (for Python) over the full version for the build stage, and, for the production stage, using the scratch base image (no OS, no dependencies, no data or apps). Using a scratch image has pros and cons, but when we are considering image size and optimization, it is an ideal route.
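As a rough sketch of these two techniques together, here is a minimal multi-stage Dockerfile; I am assuming a Go service, since scratch requires a statically linked binary (the article’s Python example relies on slim/alpine bases instead).

```dockerfile
# --- Build stage: full toolchain, never shipped in the final image ---
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 produces a statically linked binary that can run on scratch.
RUN CGO_ENABLED=0 go build -o /app .

# --- Production stage: no OS, no shell, just the binary ---
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```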

Another interesting piece of this article is the information on advanced techniques like distroless images, using Docker BuildKit, using a .dockerignore file, and eliminating any excess files. The way distroless images are explained by the writer makes the concept and the use case very clear. The high-level differences between the Full Distribution Image, the Scratch Image, and the Distroless Image are described as different ways to pack for a trip:

  1. Pack your entire wardrobe (Full Distribution Image)
  2. Pack nothing and buy everything at your destination (Scratch Image)
  3. Pack only what you’ll actually need (Distroless Image)

The analogy makes the relationship between these three image options seem obvious, but I can imagine that applying any of the techniques described would require some perseverance. This article describes an approach that juggles simplicity, performance, cost, and security with very impactful results. Those results are proof of the value these techniques can provide, and I will be seeking to apply them in my future work.
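As a sketch of the third packing option, the production stage can swap scratch for a distroless base, which keeps runtime essentials like CA certificates and a libc but still has no shell or package manager; the Go service is again my own assumption, and the tag below is one commonly published distroless base image.

```dockerfile
# Same build stage as before; only the production base changes.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /app .

# Distroless: runtime essentials only -- no shell, no package manager.
FROM gcr.io/distroless/base-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```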

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

Exploring OpenAPI with YAML

A blog from Amir Lavasani caught my attention this week, as it aligns perfectly with topics we are currently focusing on in our course, like working with the OpenAPI Specification (OAS) using YAML. Working with REST APIs is relatively new for me, and fully comprehending how these requests and interactions work is still in progress. Lavasani structures the post as a tutorial for building a blog posts API, but there is a surplus of background information and knowledge provided for each step. Please consider reading this blog post and giving the tutorial a try!

The tutorial starts, as we all should, with planning out anticipated endpoints and methods, understanding the schema of the JSON objects that will be used, and acknowledging project requirements. This step, though simple, is a helpful reminder that planning is vital to ensure a clean and concise structure once the work is implemented. Moving into the OAS architecture, Lavasani simplifies it into the Metadata Section, the Info Object, the Servers Object, and the Paths Object (and all the objects that fall within it). Authentication is touched on briefly, but the writer provides links to outside sources to learn more. For me, this post reiterates the OpenAPI work we have completed in class with another example of project application and provides additional resources to learn from and work with.
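As a rough sketch of how those sections fit together in YAML (the names and fields here are illustrative, not Lavasani’s exact spec):

```yaml
# Minimal OAS 3 sketch for a blog posts API.
openapi: 3.0.3            # metadata section
info:                     # info object
  title: Blog Posts API
  description: Supports **Markdown** in description fields.
  version: 1.0.0
servers:                  # servers object
  - url: https://api.example.com/v1
paths:                    # paths object
  /posts:
    get:
      summary: List all blog posts
      responses:
        "200":
          description: A JSON array of posts
```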

We have already reviewed most of the basics this post touches on in our course, but the writer provides valuable information about the minor details. A hint that was useful to me was the ability to use Markdown syntax in the description fields of an OAS – this can improve the ease of use and understanding of the API. I also learned how the full URL is constructed from the paths object. We know our paths are based on the endpoints we define and the operations (HTTP methods) they support, but to make sense of it all, these pieces of information are concatenated with the URL from the servers object to create the full URL. This is somewhat obvious, but seeing it spelled out as Lavasani does is very useful for reinforcing what we know about these interactions. Another new piece of knowledge relates to the parameter object. I was not initially aware of all the ways we can set operation parameters. We know how to use parameters via the URL path or the query string, but we can also use headers or cookies for this purpose, which is useful to know for future implementations. Lastly, the writer mentions the OpenAPI Generator, which can generate client-side and server-side code from an OAS. Though I am not familiar with this tool, I can see its practicality, and I will likely complete this tutorial and follow up by learning about the outside tools mentioned.
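For example, the parameter object declares its location with an `in:` field, and all four locations follow the same shape; this fragment (which would sit under a path item or operation) is a generic sketch, not taken from the post:

```yaml
# The four parameter locations OAS supports, each declared with `in:`.
parameters:
  - name: postId        # /posts/{postId}
    in: path
    required: true
    schema: { type: string }
  - name: limit         # /posts?limit=10
    in: query
    schema: { type: integer }
  - name: X-Request-Id  # request header
    in: header
    schema: { type: string }
  - name: sessionId     # cookie
    in: cookie
    schema: { type: string }
```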

This blog provides a practical example of working with the OpenAPI Specification, reinforcing concepts we’ve studied in class while providing useful insights. 

From the blog CS@Worcester by cameronbaron and used with permission of the author. All other rights reserved by the author.

The Best Linux Distro to Learn to Become a Hacker

This week, I will be talking about the “best” Linux distro you should learn in order to get the most out of your hacking. This topic is widely debated, and you will get many different answers from asking around, with people claiming different distros to be the “best.” Although that title can be somewhat arbitrary, there are some specific distributions of Linux that are objectively better suited for hacking.

Kali Linux is often regarded as one of the best distros to learn for hacking, if not the best, because it was specifically designed for digital forensics and penetration testing. Developed by Offensive Security, this Debian-based distro remains a favorite among coders and hackers, and it comes loaded with security testing tools, powerful programs, and applications that make life easier for people who want to become hackers (or are hackers already). Although it can be a bit overwhelming, Kali is extremely helpful for beginners, because all of these tools are laid out for you, helping you learn right away how to use them and what their capabilities are.

In the podcast linked below, John mentions that he typically uses Ubuntu, and that people ask him, “John, why are you using Ubuntu when you could have been using Kali or Parrot OS?” He responds, “I think it’s really valuable to learn how to install those tools, learn how to configure those tools, watch, and see them break–because then you’ll be able to figure out how to fix them, and you’ll be able to troubleshoot and understand what are the packages, what are the repositories, how does this all work within Linux.” He believes that getting through the learning curve is worth it because it will ultimately be good for your own learning and growth. At the end of the day, each distribution of Linux has its own strengths and weaknesses and will be a little different from the others. Having knowledge of and experience with these tools will allow you to use them when solving problems and will make you a better hacker.

I have never experimented with Kali Linux before, but I do have some experience when it comes to learning about hacking through Linux. There is a special, custom-made distribution of Linux called Tails, and like Kali, it is based on Debian. However, there is a very big difference between these two distros; while Kali seems to focus more on offensive hacking, Tails is more defensive and prioritizes both privacy and security through its unique interface. It is made to be booted as a live DVD or USB and never writes to the hard drive or SSD, instead using RAM, and leaves no digital footprint on the machine (unless told otherwise).

In conclusion, Linux is still somewhat unfamiliar to me, as I only have limited experience. I would like to learn more about Kali Linux in particular, but would also like to explore other distributions and learn about their potential.

Watch it here:
https://www.youtube.com/watch?v=T7AaBcNj-mA

From the blog CS@Worcester – Owen Santos Professional Blog by Owen Santos and used with permission of the author. All other rights reserved by the author.

Managing a product backlog within Scrum

With an honors project coming up for one of my courses, I was going to have to learn how to become a single-person Scrum team. With the average Scrum team being seven to ten people, I knew it was going to be both a strange and difficult task.

I knew my first order of business would be to create a product backlog as I am the product owner (among many other things being the only member of the team). Diving in headfirst, I knew what a product backlog was but not how to set up an effective one.

Thankfully, “A guide on Scrum product backlog” by Brianna Hansen was the perfect blog to stumble across. She eloquently states what a product backlog is, why one should be maintained throughout a project, and how to create a product backlog geared toward success. As an added bonus, the end of the blog even provides a platform to create and maintain a product backlog.

As I previously stated, I already knew what a product backlog is: everything that needs to be done for a product, including maintaining it. But as much as a product backlog is a to-do list, one way to increase success is not to overload it. Keep it simple but effective. No one on the Scrum team (in this case, me) wants to scroll through a product backlog for hours.

Time management is crucial for a product backlog. Certain items in the backlog are going to be more time-consuming than others, so considering this when moving product backlog items into the sprint backlog is very important to sprint success.

Defining the product vision is one of the major points she gives for maintaining a successful product backlog. This usually means the whole team gets involved to make sure the vision for the product is shared. While in my case I may be the only member, Hansen does give some very important questions for me to ask myself when planning my product and adding items to the backlog:

  • “What problem does the product solve?”
  • “Who are the target users or customers?”
  • “What unique value does the product offer?”

Taking these questions into consideration will help to guide me through this project and help to increase my chances of success.

Finding this blog was incredibly helpful for taking my first steps into trying Scrum firsthand, and I intend to use what I learned as I navigate my honors project.

From the blog CS@Worcester – DPCS Blog by Daniel Parker and used with permission of the author. All other rights reserved by the author.

Blog Week 11

This week, I found an article that came out only about eight days ago. The article, titled “Academic papers yanked after authors found to have used unlicensed software”, dives into how two authors had their paper retracted from an academic journal because they used software for which they had not obtained a license.

The authors used software called “FLOW-3D” to help them calculate results exploring dam failures. When the parent company, appropriately named “Flow Science,” found out about this, it filed a complaint and was quickly able to have the paper taken down by Elsevier’s Ain Shams Engineering Journal. The journal’s editor-in-chief even stated: “One of the conditions of submission of a paper for publication is that the article does not violate any intellectual property rights of any person or entity and that the use of any software is made under a license or permission from the software owner.”

I find this to be an important issue that should certainly be talked about. This semester, we learned about the importance of obtaining these licenses for a multitude of reasons, but I don’t believe we saw any real-life examples of the consequences of failing to do so. This article shows that using software, or creating software, without properly getting a license for it can have repercussions, especially when a company, journal, or other entity requires one. While I’m unsure if I’ll be writing an academic paper in the near future, I imagine at some point, if I further my education, I will. And if I do, and I need to use software in a manner similar to the people in the article, I will certainly make sure I am not breaching any guidelines or laws when it comes to software licensing.

The article also mentions that, in 2023, over 10,000 research papers were retracted, which is a new record. I’m unsure if these were all due to licensing issues or if other issues were involved as well, but it still proves the point that this is taken very seriously and that you can face repercussions if you are not careful.

There are also many other kinds of consequences you may face by not obtaining licenses when necessary, but I think this article is a good one to read, especially if you plan on writing a paper or study at some point in which you use software to help support your work. It is always good to be aware of what you need to do to write a paper appropriately in the first place, so you don’t have to deal with the potential consequences afterward.

From the blog CS@Worcester – RBradleyBlog by Ryan Bradley and used with permission of the author. All other rights reserved by the author.

The Importance of Software Licenses

We spent a lot of time in class going over the different types of software licenses and the reasoning for why they are necessary, so I figured that it would make sense to dig deeper into the topic.

The blog that I selected introduces software license management as a way to strategically optimize and manage the usage of software within a company or organization. The blog post defines software license management (SLM) as the process of ensuring compliance with software licensing agreements while also minimizing overspending and inefficiency throughout the process. It also covers the reasons why SLM matters, pointing to the overall operational benefits that come with proper SLM as well as the general legal and financial risks associated with non-compliance. Throughout the blog, the various types of licenses are all mentioned and explained, such as open-source, copyleft, and permissive licenses.

I selected this blog post because of its relevance to what we learned in class: it gave a refresher on software licenses and further elaborated on the overall subject. It also shows why software license management is important to a company, as it often leads to an increase in both operational and compliance efficiency. This could be useful for me in the future, as the article explained the importance of SLM in a workplace environment, which is very useful context to have.

This article gave me a larger understanding of software license management and its benefits. It also helped clarify how companies tailor their strategies based on their operational needs, as the differentiation of software license types made this clear to me. I was very intrigued by the risks of not having SLM, as this can lead to a lot of inefficiency as well as costly penalties for an organization, which is something I had never really considered. At the end of the article, it lists a number of automation tools for SLM, which I found very interesting. This made me think about how technology can simplify complex tasks, which can be both beneficial and detrimental for my future. I intend to apply this knowledge of software licenses and SLM in future workplaces. To me, the benefits outweigh the negatives, as implementing SLM will only increase efficiency within a company, which is generally a goal for most companies. Overall, this blog post did a great job of explaining the different types of software licenses and showing me why software license management is so important, as well as why it is necessary for overall efficiency within an organization.

Source: https://whatfix.com/blog/software-license-management/

From the blog CS@Worcester – Coding Canvas by Sean Wang and used with permission of the author. All other rights reserved by the author.