Category Archives: CS-343

Embracing the Future of Coding with Gitpod

In the ever-evolving landscape of programming, developers often find themselves grappling with unforeseen challenges. Recently, I encountered one such obstacle during my computer science class, which eventually led me to explore a game-changing solution – Gitpod. This is the story of how Gitpod came to my rescue when Docker, a crucial part of an in-class activity, failed to run on my M1 MacBook Pro, and my professor recommended Gitpod instead.

The Problem

The class assignment required me to work with Docker containers for a web application, and I was excited to get started. However, I hit a roadblock when I attempted to use Docker on my M1 MacBook Pro. Due to compatibility issues with the M1 chip, I found myself unable to run Docker containers, leaving me perplexed and frustrated. My professor, understanding my predicament, suggested an alternative solution – Gitpod.

A Quest for a Solution

Eager to continue with my assignment and make the most out of the in-class activity, I immediately began researching Gitpod. This cloud-based development environment promised to provide a solution to my Docker woes while offering a host of other benefits. Without further ado, I dived into the world of Gitpod.

The Gitpod Experience

Upon signing up for Gitpod, I discovered the true power of cloud-based development environments. Gitpod provides an intuitive interface that closely resembles VS Code, making the transition remarkably smooth. It offers a wide range of programming languages and frameworks, ensuring that it can cater to almost any development need. One of its most powerful features is the ability to create a development environment directly from a Git repository, which makes collaboration with peers more efficient.

Additionally, Gitpod’s integration with GitHub is seamless: it allowed me to work directly with my project’s repository. This made it easy to commit and push code changes, ensuring that my work was well-organized and readily accessible.

The Benefits of Using Gitpod

  1. Compatibility: Gitpod works seamlessly on M1 Macs, resolving the compatibility issues that I faced with Docker.
  2. Accessibility: Gitpod is accessible from anywhere with an internet connection, which is a major advantage, especially for students and developers who are always on the move.
  3. Time Efficiency: Gitpod’s pre-configured environments saved me hours of troubleshooting and setup, enabling me to focus on coding and project development.
  4. Scalability: As my project expanded, Gitpod effortlessly scaled to accommodate my needs without compromising performance.
  5. Collaboration: The collaborative features of Gitpod made working with my classmates on group projects a breeze. We could effortlessly share and collaborate on code in real time.

Conclusion

In the end, the incompatibility of Docker with my M1 MacBook Pro might have been a setback, but my encounter with Gitpod was nothing short of transformative. It provided a solution to my immediate problem and introduced me to the world of cloud-based development environments. Gitpod has become a valuable tool in my programming arsenal, and I encourage every developer to explore its capabilities.

References:

  1. Gitpod – The Dev Environment Built for the Cloud
  2. Visual Studio Code – Docker Extension
  3. GitHub – The World’s Leading Software Development Platform

From the blog CS-343 – Hieu Tran Blog by Trung Hiếu and used with permission of the author. All other rights reserved by the author.

CS343 Blog Post for Week of November 6, 2023

This week, I wanted to continue writing about the SOLID software design principles. We’ve reached the letter “I”, which represents the Interface Segregation Principle. I’ve noticed a theme across the SOLID design principles: as many parts of your software as you can manage should be independent of one another, so that modifications can be made to one part without compromising the whole. The Interface Segregation Principle brings the relationship between the software user and designer into focus.

Once again defined by Robert C. Martin, the Interface Segregation Principle states that “clients should not be forced to depend upon interfaces that they do not use.” This principle goes hand-in-hand with the previously discussed Single Responsibility Principle, which declares that “a class should have one, and only one, reason to change.” The goal of abiding by these principles is to produce code that is resilient to future modification, and to foster that resilience by building independent software components.

The article provides the real-life example of a years-old piece of software being iterated on. The designers may want to add new methods to existing interfaces, even though that may introduce new dependencies to the interface and violate the Single Responsibility Principle. The author introduces the term “interface pollution”, referring to the phenomenon of existing interfaces becoming cluttered with new methods that introduce new responsibilities to the interface, rather than building new interfaces that handle those responsibilities.

The author provides a practical example of this principle through another coffee machine implementation. A new EspressoMachine class is proposed, but it requires a method that the existing BasicCoffeeMachine class has no use for. The problem of interface pollution is illustrated simply in this example when a brewEspresso() method is added to the shared CoffeeMachine interface to support the new EspressoMachine class, even though no other kind of CoffeeMachine would use it. This approach also introduces the issue that a caller could invoke brewEspresso() on a BasicCoffeeMachine, or brewFilterCoffee() on an EspressoMachine, and either case would throw an exception.

The solution in this case is to create two new interfaces derived from the existing CoffeeMachine interface: FilterCoffeeMachine and EspressoCoffeeMachine. This way, each concrete CoffeeMachine class exposes only the methods it actually supports, which strengthens the independence of the concrete classes. If a new CoffeeMachine design demands methods from both interfaces, it can simply implement both.
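
A minimal Java sketch of the segregated design might look like this. The interface names and brew methods follow the article’s example as described above; the addGroundCoffee() method and the method bodies are placeholders of my own:

```java
// Each interface carries only the methods its clients actually need.
interface CoffeeMachine {
    void addGroundCoffee(double grams);
}

interface FilterCoffeeMachine extends CoffeeMachine {
    String brewFilterCoffee();
}

interface EspressoCoffeeMachine extends CoffeeMachine {
    String brewEspresso();
}

// Each concrete class implements only the interface it can honor,
// so no method is left throwing "unsupported operation" exceptions.
class BasicCoffeeMachine implements FilterCoffeeMachine {
    public void addGroundCoffee(double grams) { /* store the coffee */ }
    public String brewFilterCoffee() { return "filter coffee"; }
}

class EspressoMachine implements EspressoCoffeeMachine {
    public void addGroundCoffee(double grams) { /* store the coffee */ }
    public String brewEspresso() { return "espresso"; }
}
```

A hypothetical combination machine could then declare `implements FilterCoffeeMachine, EspressoCoffeeMachine` and honestly support both brewing methods.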

I chose to write about this topic because I’m still learning how to assign responsibilities to different parts of my software. Having interfaces that contain too many methods rather than continuing to further specialize them is an issue I’ve faced in software that I’ve written myself. Studying the SOLID principles and actively applying them to my work will help save me a lot of time and effort in my future projects.

From the blog CS@Worcester – Michael's Programming Blog by mikesprogrammingblog and used with permission of the author. All other rights reserved by the author.

Exploring the Significance of REST APIs

REST APIs: How They Work and What You Need to Know: https://blog.hubspot.com/website/what-is-rest-api

Introduction: In the ever-evolving landscape of software applications, the demand for seamless data sharing and communication has propelled the prominence of Representational State Transfer Application Programming Interfaces (REST APIs). The blog post titled “What is REST API?” serves as a comprehensive guide, shedding light on the fundamental concepts, principles, and practical applications of REST APIs.

Reason for Selection: The selection of this resource stems from its ability to provide a clear and concise overview of REST APIs, making it accessible for both beginners and those seeking a refresher. The blog’s structured approach, starting with basic terms and progressing to the principles of REST, makes it an ideal starting point for anyone looking to understand the role of APIs in modern software development.

Content Overview: The blog begins by defining key terms such as clients, resources, and servers, setting the stage for a nuanced understanding of REST APIs. It then introduces the concept of REST as a set of guidelines facilitating internet communication for efficient integrations. The six rules of REST APIs, including client-server separation, a uniform interface, and statelessness, are elucidated, providing a solid foundation for grasping the core principles.

Reflection on Material: The content not only explains what REST APIs are but also delves into the reasoning behind each rule, offering insights into their importance. The emphasis on statelessness, for instance, is justified by the reduction in server memory requirements and improved scalability. The layered system principle is likewise explained, highlighting the role of intermediary servers that operate without disrupting client-server interactions.

Applicability in Future Practice: Understanding REST APIs is crucial for anyone venturing into software development or related fields. The blog’s breakdown of HTTP methods, URLs, and the common language of communication, HTTP, provides practical knowledge that a student can directly apply in API development. The explanation of caching and its role in enhancing server efficiency offers a valuable insight that can be leveraged to optimize web applications.
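
To make that breakdown of HTTP methods and URLs concrete, here is a minimal sketch of a GET request using Java 11’s built-in HttpClient; the endpoint URL is a hypothetical placeholder, not one taken from the blog:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestGetExample {
    public static void main(String[] args) throws Exception {
        // The URL identifies the resource; the GET method asks the server to return it.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users/42")) // hypothetical endpoint
                .GET()
                .build();

        // The response carries a status code and a representation of the resource.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```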

The blog’s real-world examples of REST APIs, such as Twitter, Instagram, Spotify, and HubSpot, illustrate the versatility of REST in various domains. For a student aspiring to build applications with social media functionalities or integrate music-related features, this resource serves as a roadmap for leveraging existing APIs effectively.

Conclusion: In conclusion, the blog post serves as an excellent resource for grasping the fundamentals of REST APIs. Its adherence to the recommended length, coupled with its clear and informative content, makes it a valuable asset for students and professionals alike. The insights gained from this resource can empower students to navigate the complex world of API development with confidence, laying the groundwork for successful future practices in software development.

From the blog CS@Worcester – Site Title by rkaranja1002 and used with permission of the author. All other rights reserved by the author.

BLOG #2

Hey everyone,

My name is Abdullah Farouk, as I stated before, and this is going to be my second blog of the semester. We have been using a website called GitLab, and I remember using GitHub last year, so I wanted to do my own research into what these sites are about and why the professors use them. I know the basics of what the sites do and how they function, but this research gave me more in-depth information.

Both sites have “Git” in their name, so let’s start by explaining that. Git is a free and open-source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. Version control means that Git keeps a history of your changes and can even take you back to earlier versions. This is helpful to everyone, but especially to coders, because you can always go back to the version that works if you are stuck. For example, my code was working normally, so I saved that version and then tried to add extra features to make it function with more detail; somewhere along the way I messed up the code and broke the whole program. I simply pulled the old version of my code that had worked before and went on from there (see the sketch below).

GitLab and GitHub allow anyone to create remote, public repositories on the cloud for free. You can share your projects with your classmates or anyone else you want to have access, and they can pull the project to their computer, make changes, and save them back to the cloud. You can also fork someone’s project, getting your own copy to build on without changing the owner’s code. This makes teamwork easier and more efficient, so I see why the professors use these sites: multiple people can work on the same project at the same time and then merge their code into one big project at the end. GitLab also integrates with third-party tools like Slack, and it can store and manage Docker images, which makes it easier to work with those applications.
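
Here is a minimal sketch of that workflow using standard Git commands; the repository URL and the commit hash are placeholders:

```bash
# Get your own copy of a shared project (placeholder URL)
git clone https://gitlab.com/example-user/example-project.git
cd example-project

# Save a known-good version before experimenting
git add .
git commit -m "working version before new features"

# ...later, if the experiments break the program:
git log --oneline                 # find the hash of the working commit
git checkout <commit-hash> -- .   # restore the project files from that commit
```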

This is the reference article I used to get the information for my blog: https://blog.hubspot.com/website/what-is-github-used-for#what-github

From the blog CS@Worcester – Farouk's blog by afarouk1 and used with permission of the author. All other rights reserved by the author.

Enhancing Development Environments with Dev Containers

When you work in a codespace, the environment you are working in is created using a development container, or dev container, hosted on a virtual machine.

Dev Containers are Docker containers that are specifically configured to provide a fully featured development environment. They are a crucial part of our course material, and this blog post will explore their significance and functionality.

Summary of Dev Containers

Dev Containers are isolated development environments created in virtual machines, ensuring a consistent setup for everyone working on a project. They can be customized to match the specific requirements of a repository, enabling the inclusion of frameworks, tools, extensions, and port forwarding.

Why Dev Containers Matter

The reason for selecting Dev Containers as our topic is simple: they are a game-changer for modern software development. With write permissions to a repository, you can create or edit the codespace configuration, allowing teams to work in a tailored environment. They eliminate the “it works on my machine” problem, providing a consistent, reproducible setup for all team members.

Reflection on Dev Containers

As I delved into this topic, I was particularly struck by the flexibility and versatility of Dev Containers. The ability to create custom configurations, use predefined setups, or rely on default configurations makes it adaptable to various scenarios. It’s not just about convenience; it’s about ensuring that development environments are well-structured and efficient.

What I Learned

One key takeaway from studying Dev Containers is the importance of clear and standardized development environments. This is especially vital in large repositories that contain code in different programming languages or for various projects. It’s not just about having the right tools; it’s about having them consistently available to every team member.

The use of Dockerfiles, referenced in the devcontainer.json file, is another fascinating aspect. Dockerfiles allow you to specify the steps needed to create a Docker container image, making it easy to reproduce your development environment.
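
For illustration, a minimal devcontainer.json that builds its image from such a Dockerfile might look like the following sketch; the container name, extension ID, and port are placeholders of my own, not taken from the course material:

```json
{
  "name": "Example Dev Container",
  "build": {
    "dockerfile": "Dockerfile"
  },
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint"]
    }
  },
  "forwardPorts": [3000]
}
```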

Applying What I Learned

In my future practice, I plan to leverage Dev Containers in collaborative projects. The ability to define a single dev container configuration for a repository or multiple configurations for different branches or teams is a feature I intend to use strategically. By tailoring development environments to the specific needs of a project, I aim to improve productivity and ensure that every team member has the tools they require.

Resource Link

Introduction to Dev Containers

In conclusion, Dev Containers are a powerful tool in modern software development, ensuring consistency, efficiency, and collaboration within development teams. By understanding how to create, customize, and apply Dev Containers, we can take our projects to the next level and tackle complex coding challenges more effectively. This topic directly relates to our course material, offering practical knowledge that can be applied in real-world scenarios.

From the blog CS@Worcester – Abe's Programming Blog by Abraham Passmore and used with permission of the author. All other rights reserved by the author.

CS343 – Week 8

An API, or application programming interface, is a set of specifications that allows applications to interact with one another. APIs enable the transfer of information and act as an intermediary layer that lets companies open their application data to external third-party developers as well as to internal departments within the company. An example of how APIs work is third-party payment processing. For instance, when making a payment online and being given the option to “Pay with PayPal”, the connection between the third-party payment system and the website where the payment is made relies on APIs. When the option is selected, the website sends a request through the API to the external program/web server — in this situation, the third-party payment system. The external system then sends the requested information back through the API to the web server that made the initial request.

REST stands for “representational state transfer”, an architectural style defined by a set of design principles. REST APIs, sometimes referred to as RESTful APIs, were first defined in 2000 by Dr. Roy Fielding and provide more flexibility and freedom for developers. They can support many different data formats and are required to follow six principles, also known as architectural constraints: uniform interface, client-server decoupling, statelessness, cacheability, layered system architecture, and code on demand (optional).

The six constraints break down as follows:

  1. Uniform interface: All API requests for the same resource should have identical formats, ensuring that each piece of data belongs to only one uniform resource identifier (URI). Responses should not be too large while still containing the information the client needs.
  2. Client-server decoupling: Client and server applications must be completely independent of each other in REST API design. The client application only needs to know the URI of the requested resource and cannot interact with the server in any other way.
  3. Statelessness: Each request must include all the information necessary for processing it, which means REST APIs do not require any server-side sessions (see the sketch after this list).
  4. Cacheability: Resources should be cacheable on the client or server side, and server responses should state whether caching is allowed for the delivered resource. The purpose is to improve performance on the client side while increasing scalability on the server side.
  5. Layered system architecture: Calls and responses pass through different layers, so do not assume the client and server applications connect directly to each other. These APIs must be designed so that neither the client nor the server can tell whether it is communicating with an intermediary or with the end application.
  6. Code on demand (optional): Responses usually deliver static resources, but in certain cases they can contain executable code (such as Java applets); such code should run only on demand.
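
As a small illustration of the statelessness constraint, here is a hedged Java sketch (Java 11+; the endpoint and token are hypothetical placeholders) in which every request carries its own credentials instead of relying on a server-side session:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StatelessRequestExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Statelessness: the request itself carries everything the server needs
        // to process it (here, an auth token), so no server-side session exists.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/orders/7")) // hypothetical resource
                .header("Authorization", "Bearer <token>")           // placeholder credential
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Cacheability: the server signals whether this response may be cached.
        System.out.println(response.headers()
                .firstValue("Cache-Control").orElse("no cache header"));
    }
}
```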

What is a REST API? | IBM

From the blog CS@Worcester – Jason Lee Computer Science Blog by jlee3811 and used with permission of the author. All other rights reserved by the author.

REST Queries

As a computer science student, my recent deep dive into REST filters and queries has been an exciting journey, and I’m eager to share my newfound knowledge on my blog. This educational adventure has been made all the more engaging by drawing inspiration from a renowned source, “The Web Developer’s Guide to REST APIs,” an authoritative guide known for its in-depth insights. By harnessing the capabilities of REST filters and queries, I can adeptly extract, manipulate, and interpret data from various real-world APIs, offering invaluable insights for my web development endeavors.

The power of REST filters and queries becomes abundantly clear when applied to an API or database. These tools empower me to refine, categorize, sort, and arrange data with utmost precision. A simple yet potent filter allows me to isolate a specific subset of data, such as products falling within the “Beverages” category, using a query like: /products?filter=category eq 'Beverages'. This not only trims down the dataset returned but also permits a laser focus on the data that is most pertinent to my objectives. Furthermore, I can apply the $orderby parameter to arrange the results in ascending or descending order, perhaps alphabetically or by price, for a comprehensive understanding of the data landscape. This degree of command over data proves instrumental, especially when dealing with voluminous datasets in real-world applications.

REST queries, in addition to their filtering and sorting prowess, offer the capability to conduct intricate data operations. Consider, for instance, the need to identify the top 5 customers who have placed the highest number of orders. By adroitly combining the $filter and $orderby parameters, I can craft a query like: /customers?$filter=orderCount gt 0&$orderby=orderCount desc&$top=5. This flexible and nuanced data manipulation capacity is a linchpin in my decision-making process as a developer and is a skill set crucial for me to master. Consequently, anticipate an insightful journey as I delve deeper into the realms of REST filters and queries, leveraging the wisdom from “The Web Developer’s Guide to REST APIs,” to enhance my data manipulation expertise.
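
To put these queries into practice, here is a minimal Java sketch that URL-encodes and assembles the top-5-customers query; the /customers path and the OData-style parameters come from the examples above, while the encoding approach is my own illustration:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class FilteredQueryExample {
    public static void main(String[] args) {
        // URL-encode the filter and ordering expressions so spaces are query-safe.
        String filter = URLEncoder.encode("orderCount gt 0", StandardCharsets.UTF_8);
        String orderBy = URLEncoder.encode("orderCount desc", StandardCharsets.UTF_8);

        // Assemble the query string for the top 5 customers by order count.
        String query = "/customers?$filter=" + filter
                + "&$orderby=" + orderBy
                + "&$top=5";
        System.out.println(query);
        // -> /customers?$filter=orderCount+gt+0&$orderby=orderCount+desc&$top=5
    }
}
```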

From the blog CS@Worcester – Andres Ovalles by ergonutt and used with permission of the author. All other rights reserved by the author.

Deciphering the Layers of Web Systems: Front End, Back End, and Data Persistence (Week-8)

In the expansive digital landscape, the seamless operation of web systems relies on the harmonious integration of front end, back end, and data persistence layers. Each of these foundational elements plays a pivotal role in delivering the interactive and dynamic experiences we’ve come to expect from modern web applications.

Front End Development: Crafting the User Interface

The front end is the visible part of the iceberg, the user interface that we interact with directly. Front-end developers are like the set designers of a play, creating the visual elements that users engage with—the text, images, buttons, and more. With HTML for structure, CSS for style, and JavaScript for interactivity, they construct responsive and adaptive interfaces that provide a consistent experience across various devices.

Frameworks such as React, Angular, and Vue.js have transformed the landscape of front end development. They offer developers powerful tools to create dynamic interfaces that respond to user interactions in real time. A core principle of front end design is accessibility, ensuring that web applications are inclusive for all users, including those with disabilities.

Back End Development: The Engine Behind the Interface

Lurking beneath the surface, the back end is where the application’s core logic resides—the server side. It is the engine room that powers the application, handling data processing, API calls, and server management. The back end is the realm of languages such as Python, Java, Ruby, and server-side frameworks like Node.js, which allows for JavaScript to be used on the server, facilitating full-stack development practices.

The back end manages interactions with the database, business logic, authentication, and serves the appropriate data to the front end. Effective back end development ensures that web applications are fast, secure, and scalable.

Data Persistence Layer: The Database

At the base of the web system lies the data persistence layer, akin to the foundation of a building. This layer is where databases live, tasked with storing data so that it remains available for future retrieval. Depending on the application’s requirements, databases may be relational, such as MySQL and PostgreSQL, or non-relational, like MongoDB.

The database is crucial for storing, organizing, and retrieving data efficiently. A well-designed database supports the application’s needs for speed, reliability, and integrity, allowing for high-volume transactions and secure storage of sensitive information.

Conclusion: The Symphony of Web Development

Developing a web system is akin to orchestrating a symphony, where each section plays its part in creating a beautiful harmony. The front end appeals to the senses, the back end conducts the operation, and the data persistence layer ensures the longevity and integrity of the performance. Understanding these distinct layers and their integration is crucial for web developers who aspire to create robust, user-centric, and efficient web applications.

Together, these components form the infrastructure of the digital experiences we encounter daily. A solid grasp of each layer’s role, challenges, and tools not only equips developers with the knowledge to build effective web solutions but also the insight to innovate and push the boundaries of what’s possible in the web’s ever-evolving narrative.

From the blog CS@Worcester – Kadriu's Blog by Arber Kadriu and used with permission of the author. All other rights reserved by the author.

What Is Docker?

And How is it Useful?

As I work on more and more assignments, I am constantly hit with the same error while starting up my work for class: VS Code reports that it cannot open my dev container because Docker is not running.

The fix is very simple: start up Docker. The error simply means that VS Code is attempting to open a container, but I don’t have Docker running to do that. As this reminder got me in the habit of making sure Docker was open before attempting to work in a container, I started to ask myself, “What is Docker actually doing, and what is a container?”

I found an excellent post that breaks down what Docker is and how the individual parts of Docker work together. The blog, “Introduction to Containers and Docker” by Liam Mooney (https://endjin.com/blog/2022/01/introduction-to-containers-and-docker), provides great information on how Docker uses containers, with an example of a Dockerfile and how to build your own container. Mooney starts off by explaining what a container is and how containers differ from virtual machines. Containers and virtual machines are both ways of creating an isolated environment on a computer; however, virtual machines are bulky and slow in comparison to containers. Both can be used to create stable environments to run software in, but a virtual machine requires an image of a guest OS to be installed on the host computer, and that OS has to boot every time you start the virtual machine. That, paired with the space and CPU resources consumed by running two OSs that share many features, leads to much longer start-up times and slower performance. Containers instead use the capabilities and features of the host OS to run their environments, which makes them much lighter: only select software and dependencies need to be included in the environment.

The blog goes on to explain how Docker is able to use containers to create environments in seconds. Docker uses Dockerfiles, which are lists of commands the Docker daemon executes to build the image you wish to run as your environment. The Docker daemon is a background service that builds, manages, and runs containers, acting as the middleman between the container and the host OS. Once an image has been built, it can be opened in any number of containers that are all independent of each other.
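
For a sense of what such a file looks like, here is a minimal illustrative Dockerfile; the base image and commands are placeholders of my own, not Mooney’s exact example:

```dockerfile
# Each instruction is a step the Docker daemon executes to build the image.

# Start from a small base image with Python preinstalled.
FROM python:3.11-slim

# Directory inside the image where the remaining commands run.
WORKDIR /app

# Install only the dependencies the environment needs.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code and set the container's default command.
COPY . .
CMD ["python", "app.py"]
```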

The examples given by Mooney are great for understanding exactly how you would create a simple container, and they gave me a better look at what Docker is actually doing when I am working in VS Code and it opens containers for me to work in. Although I don’t see myself designing my own containers anytime soon, it is great to know how the software executes and manages these environments.

From the blog CS@Worcester – CS Learning by kbourassa18 and used with permission of the author. All other rights reserved by the author.

What is Refactoring?

Refactoring is the process of restructuring code without changing or adding to its functionality and external behavior. There are many ways to go about refactoring, but most of them come down to applying small, standard transformations. Because each change to the existing code is so tiny, the software’s functionality and behavior are preserved, and the changes are less likely to cause any errors. So, what is the point of refactoring? Refactoring turns messy, confusing code into clean, understandable code. Messy code is hard to understand and maintain, and when it’s time to add a required piece of functionality, the existing confusion causes a lot of problems. With clean code, it’s easier to make changes and improve on any problems, and anybody who ever works with the code can understand it and appreciate how organized it is. When messy code isn’t cleaned up, it slows down feature development, because developers have to take more time to understand and trace the code before they can make any changes themselves.
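
To make this concrete, here is a small made-up Java example of one such tiny, behavior-preserving change, an “extract method” refactoring; the class and method names are my own placeholders:

```java
// Before: one method mixes price math with formatting, so the intent is buried.
class InvoiceBefore {
    String describe(double price, double taxRate) {
        return "Total: " + (price + price * taxRate);
    }
}

// After: the calculation is extracted and named. The behavior is unchanged,
// but the code now states what it does, and the math can be reused and tested.
class InvoiceAfter {
    String describe(double price, double taxRate) {
        return "Total: " + totalWithTax(price, taxRate);
    }

    double totalWithTax(double price, double taxRate) {
        return price + price * taxRate;
    }
}
```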

Knowing when to refactor is important, and there are different times to refactor your code. Refactoring while you’re reviewing the code, before it goes live, is the best time to make any changes you can before pushing it through. You can also schedule certain parts of your day for refactoring instead of doing it all at once. By cleaning your code you are able to catch bugs before they create any problems. The main point is that cleaning up dirty code reduces technical debt. Clean code is easier to read, and anybody besides the original developer who works on that code can understand, maintain, and add features to it. The less complicated the code is, the easier source-code maintenance becomes, and a clean design can serve other developers as the basis for code elsewhere. This is why I believe that refactoring is important: changing even the smallest piece of code can lead to a better, more functional approach to programming. It helps developers get a better understanding of the code and make better design decisions.

From the blog CS@Worcester – Kaylene Noel's Blog by Kaylene Noel and used with permission of the author. All other rights reserved by the author.