Category Archives: Week-14

Why Is Git Popular Among Version Control Systems?

One of the interesting blog articles I found is about the different version control systems developers have used to manage software over time. While we have mostly been using Git, this article talks about version control systems other than Git that have existed in the past. Two of the version control systems the article mentions are Apache Subversion (SVN) and Mercurial. The article gives an overview of each one’s history: Apache Subversion is a system that maintains source code on a central server and works great for a centrally located team, while Mercurial offered most developers easy access to hosting through Fog Creek Software, which is now Glitch.

The reason I chose this blog post was to learn more about the version control systems that exist alongside Git, the advantages of those systems, and how each became a top choice among developers over time. Since we focus only on Git throughout the course, I can personally understand the structure where everybody can fork, clone, and branch when writing code, then contribute their changes back to the repository. I also learned that Git is easier to use when managing version control through issues, commits, and pull requests, which I found more interactive and highly valuable for teamwork and collaboration.

As for the other version control systems, although the structure of Apache Subversion is about the same as how we use Git, the dependence on a centralized SVN server can make committing changes to the overall repository less agile. According to Quentin Headen, the centralized SVN server requires a network connection to be available at all times in order to commit changes to the repository; otherwise you can’t commit at all. The second drawback he mentions is the heavy branching system, where branches are difficult, or sometimes impossible, to remove. In my opinion, this is another clear lesson that hosting a repository on a centralized server has real disadvantages, and that a distributed version control system is often preferable, giving developers the flexibility to work on the codebase without the issues that centralized version control systems run into.
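To make that difference concrete, here is a rough sketch of the two workflows (the commands themselves are standard, but the branch name and commit messages are just illustrative examples):

```bash
# SVN: a commit talks to the central server, so it needs a network connection
svn commit -m "Fix login bug"     # fails if the server is unreachable

# Git: commits are recorded locally; the network is only needed to share them
git checkout -b fix-login         # create a local branch
git commit -am "Fix login bug"    # works offline
git push origin fix-login         # sync with the remote when back online
```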

After reading this blog article, I learned more about the two types of version control systems, centralized and distributed. Although Git is popular due to its strong platform and built-in user base, others could choose a centralized system for enterprise teams that need its kind of scalability. In my opinion, it still depends on which type of project I am working on, and choosing the right version control system will make it easier for me to keep track of project developments, ensuring the version is up to date and accessible for all users.

Link to Blog Article: https://stackoverflow.blog/2023/01/09/beyond-git-the-other-version-control-systems-developers-use/

From the blog CS@Worcester – Hello from Kiet by Kiet Vuong and used with permission of the author. All other rights reserved by the author.

Our Approach to Testing – Rich Rogers

In this blog post, Rich Rogers, a Testing Capability Lead for Scott Logic, discusses how the people there approach testing as part of their development and delivery process, particularly through their six principles. He heavily emphasizes context as the first principle: when testing, things will always change. You can’t standardize testing, because every project will require different tests. In many ways, this resembles the Agile ideas that we’ve discussed in class, specifically the portion about “Responding to Change over Following a Plan”, and also goes hand in hand with another one of Rogers’ principles, which is “Risks over Requirements”. I completely agree that you may be given a set of requirements as a team, and it may be satisfactory for a customer to fulfill these requirements, but there is value in looking beyond just the requirements and exploring other risks or potential problems that may not have been stated. A plan exists as a guideline, not a strict rule.

Besides those two principles, the remaining ones are: “Value in Tooling”, “Quality for Humans”, “Bring a Testing Mindset”, and “Collaborate and Cross-skill”. To elaborate, the blog discusses Value in Tooling as understanding the tools you have and taking opportunities to run repeatable automated checks when applicable, so long as they are efficient in terms of cost and effort. Quality for Humans refers to the notion that, at the end of the day, humans will be the ones using these tools. The goal is to provide something that humans will be satisfied with, and that will be accessible for a person to use. In some manner, this resembles the Agile principle of Customer Collaboration, or even the Individuals and Interactions part of Agile. The Testing Mindset principle is a little broader, in that it refers to a questioning mindset that is aligned with wanting the product to succeed. Every tester has a unique way in which they approach testing, and so long as the end goal aligns, every mindset is valid. Collaborate and Cross-Skill here refers to the notion that, while the industry encourages individual testing, understanding your team’s skills and working to complement each other can be helpful.
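As a tiny illustration of the “Value in Tooling” idea, a repeatable automated check can be as small as the sketch below (my own example using Node’s built-in test runner; the function under test is hypothetical):

```typescript
import { test } from 'node:test';
import assert from 'node:assert';

// Hypothetical function under test: applies a percentage discount to a total.
function applyDiscount(total: number, percent: number): number {
  return total - total * (percent / 100);
}

// A cheap, repeatable check that can run automatically on every commit.
test('applyDiscount takes 10% off a 100.00 order', () => {
  assert.strictEqual(applyDiscount(100, 10), 90);
});
```

Checks like this cost very little to run again and again, which is exactly the cost-and-effort tradeoff Rogers describes.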

Ultimately, I think these principles can be summed up as “be flexible”, very similar to how Agile works. A willingness to understand and use tools in testing, taking the human aspect into account, a willingness to approach things differently and apply a level of curiosity and questioning, and a willingness to collaborate with others, especially those with skills and expertise that vary from yours, are all examples of setting aside rigid plans and processes. To do your best work, you must be willing to approach anything in multiple ways and with multiple mindsets. Having chosen this blog post because of its insight into testing, I definitely find myself agreeing with the overall principle behind this testing approach. I don’t know how much control I will have over testing in the future, but I would certainly like to apply a similar approach to how I test things in the future.

Blog Link: https://blog.scottlogic.com/2024/10/30/our-differentiated-approach-to-testing.html

From the blog CS@Worcester – Justin Lam’s Portfolio by CS@Worcester – Justin Lam’s Portfolio and used with permission of the author. All other rights reserved by the author.

Best Practices for REST API Design

I chose the blog post “Best Practices for REST API Design” by John Au-Yeung because it addresses the best practices developers should follow when designing REST APIs. The blog shows us strategies we can use to design APIs to the best of our abilities. In class, we have been using a REST API since Thea’s Pantry utilizes one. While we have been learning a lot about it through classwork and homework, it was interesting to read other perspectives such as this blog. This is really our first introduction to this kind of design in our computer science classes, so the more we can read and learn, the better.

In the blog, the author focuses on creating user-friendly APIs that adhere to widely accepted principles. The foundation of REST API design is using nouns in endpoints to represent resources, such as /users or /orders, rather than actions like /getUser. This approach keeps the API intuitive and aligns with REST conventions. HTTP methods play a vital role, with verbs like GET, POST, PUT, and DELETE defining the operations on these endpoints. The principle of statelessness is key to this design, meaning each request from a client must contain all the necessary information for the server to fulfill it. This avoids maintaining client-specific state on the server, simplifying scaling and debugging.

Error handling is another essential practice. APIs should return meaningful and consistent HTTP status codes, such as 404 for “not found” or 400 for “bad request,” paired with descriptive error messages to guide users on fixing issues. For managing large datasets, pagination, filtering, and sorting should be supported. These features enhance performance by limiting the data returned and allowing clients to specify exactly what they need. APIs should adopt JSON as the standard response format, as it’s widely used and easy to parse. Including appropriate content-type headers ensures compatibility across platforms. These practices foster better user experiences, maintainability, and scalability. By following them, developers can create APIs that are reliable, predictable, and efficient, promoting successful integrations across diverse client applications.
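To ground a few of these practices, here is a minimal sketch of what they can look like in code (assuming an Express-style Node server; the resource names and data are made up for illustration):

```typescript
import express from 'express';

const app = express();
app.use(express.json());

// Hypothetical in-memory data, just for illustration.
const users = [{ id: 1, name: 'Ada' }, { id: 2, name: 'Grace' }];

// Nouns in the endpoint (/users), HTTP verb (GET) for the operation.
app.get('/users', (req, res) => {
  // Pagination via query parameters limits the data returned.
  const limit = Number(req.query.limit ?? 10);
  const offset = Number(req.query.offset ?? 0);
  res.json(users.slice(offset, offset + limit)); // JSON, content-type set by res.json
});

app.get('/users/:id', (req, res) => {
  const user = users.find((u) => u.id === Number(req.params.id));
  if (!user) {
    // Meaningful status code paired with a descriptive error message.
    return res.status(404).json({ error: 'User not found' });
  }
  res.json(user);
});

app.listen(3000);
```

Each request carries everything the server needs (the ID and the query parameters), so no client-specific state has to live on the server.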

From the blog, I was able to learn the best practices for designing REST APIs. Going forward, I plan to incorporate these practices as I continue to learn more about front-end work. After reading, I feel like I will be able to deepen my learning in this area as well as share these practices with my peers.

https://stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design/

From the blog CS@Worcester – Giovanni Casiano – Software Development by Giovanni Casiano and used with permission of the author. All other rights reserved by the author.

The Importance of Software Maintenance

In this blog post, I will be talking about why software maintenance is important in an organizational setting. The blog post that I am referencing is titled “Why Software Maintenance is Necessary” and is published by Radixweb. The blog highlights four types of maintenance: corrective, adaptive, perfective, and preventative. For each type, it explains how it contributes to the long-term functionality of software: corrective maintenance involves fixing bugs or errors, while adaptive maintenance ensures compatibility with evolving technologies. Furthermore, perfective maintenance focuses on enhancing the overall performance and usability of the software, while preventative maintenance focuses on lessening potential issues before they can occur.

The blog also mentions how software maintenance is directly aligned with the goals of a business or organization. If a company were to neglect software maintenance, that would lead to vulnerabilities in their security, as well as potential technical debt. This is an interesting point, as it makes it very clear how important software maintenance is to an organization, as neglecting it would lead to major problems.

I selected this blog because it relates to our discussions in class regarding software maintenance. We touched on it in class, but I wanted to take a deeper look into it and research more about its importance. Through my research, my knowledge regarding software maintenance expanded, as I was able to learn even more about the details of it and why it is important in a professional setting.

After reading this blog post, I realized that maintenance can sometimes be overlooked in a group setting, as it becomes something that gets done automatically but is never really focused on. Maintenance can feel like something that is only done when something is broken, but in reality it should be something that is done regularly and often. I feel like maintenance should be a very high priority, as neglecting it can heavily damage companies and can lead to bad habits where employees in a company all neglect it, basically pushing the task to someone else instead of doing it themselves.

Moving forward, I plan to put a bigger focus on preventive maintenance in my professional career. Whether as a manager or as a team member, I will always look to allocate time toward maintenance. This can be done in multiple ways, such as conducting regular compatibility checks or scheduling periodic audits for preventive maintenance.

Link: https://radixweb.com/blog/why-software-maintenance-is-necessary

From the blog CS@Worcester – Coding Canvas by Sean Wang and used with permission of the author. All other rights reserved by the author.

GitHub and Docker: Streamlining Database Management for Modern Development

In today’s fast-growing, AI-driven world of software development, a deep knowledge of database management plays a pivotal role in ensuring application performance and scalability. GitHub and Docker have become indispensable tools for developers, providing streamlined workflows and efficient environments for database development, testing, and deployment. This blog explores how GitHub and Docker work together to simplify database management in today’s world.

GitHub, a leading platform for version control and collaboration, plays a key part in managing database code, schemas, and migrations. By hosting configuration files and database-related repositories, GitHub gives teams a single source of truth for their database workflows. GitHub also gives testers and developers the tools to change code and data processes without a lengthy process, and its branching, pull request, and code review facilities are what make it so effective. Version control that tracks actual data and schemas, collaboration among multiple contributors, and integration with CI/CD pipelines are the key benefits GitHub brings to database work. Docker, meanwhile, is transforming how databases are developed and tested. By enclosing databases within containers, Docker enables developers to reproduce production-like environments on local computers, guaranteeing stability across the development, testing, and deployment stages. Environment consistency, isolated containers, and scalability are Docker’s key features, giving real support to testing teams so the whole GitHub-based system can grow easily.

When combined, GitHub and Docker provide a robust solution for managing database workflows.

1. Versioning and Collaboration with Docker Files:

Dockerfiles and Compose files, essentials for database work, are stored in GitHub repositories. Developers can version-control these files and automate container builds via GitHub Actions.

2. Automated Testing:

Developers can version-control these files and create pipelines that spin up real databases for their automated tests, as shown in the workflow sketch after this list.

3. Database Migrations as Code:

Teams store migration scripts in GitHub, while Docker containers provide isolated environments to test these scripts. This method guarantees reliable schema modifications in staging and production settings.
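As a rough sketch of the automated-testing idea in item 2, a GitHub Actions workflow can start a throwaway database container for each test run (the database image, credentials, and test command below are illustrative assumptions, not details from any particular project):

```yaml
name: database-tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16            # hypothetical choice of database image
        env:
          POSTGRES_PASSWORD: example  # throwaway credential for CI only
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test       # assumed test command for the project
```

Because the container is recreated on every run, each test suite gets a clean, isolated database.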

Advantages of Using GitHub and Docker for Databases:

Reduced Onboarding Time: New contributors can start working with prebuilt Docker containers without any delays.

Improved Testing: Automated tests run against containerized databases, ensuring thorough validation of database changes.

Enhanced Collaboration: GitHub streamlines team workflows, while Docker guarantees consistent environments.

In conclusion, GitHub and Docker together form a powerful duo for modern database management, addressing challenges like environment consistency, version control, and collaboration. From small projects to large applications, this combination provides detailed workflows and ever-improving features for everyone involved. GitHub and Docker will continue to redefine how databases are managed in the software development lifecycle.

Citations:

1. GitHub Actions Documentation. (n.d.). https://docs.github.com/en/actions

2. Docker Documentation. (n.d.). https://docs.docker.com

From the blog CS@Worcester – Pre-Learner —> A Blog Introduction by Aksh Patel and used with permission of the author. All other rights reserved by the author.

Object Oriented Programming – Abstraction, Encapsulation, Polymorphism, and Inheritance

Within object oriented programming, there are four main pillars: abstraction, encapsulation, polymorphism, and inheritance. These four are essential to understanding object oriented programming and why it is important. While researching, I found a blog called “Encapsulation, Abstraction, Inheritance, and Polymorphism” by Cole Davis, which I believe does a great job of explaining all four of the pillars as well as why they are important. I chose to write about this topic because I use object oriented programming all the time and plan to keep doing so in the future. Because of this, I wanted to share some information that I find very useful in understanding how it works, in case anyone else wants to do the same.

Abstraction: One of the first major pillars you’ll learn about is known as abstraction. Cole Davis does a great job at explaining this pillar, as shown in a quote from his blog: “Abstraction is the process of combining many functions into one. Think of a thermostat. Typically, a thermostat allows the user to change the target temperature, select different modes such as heating, cooling, or fan, and turn the unit on or off. When we use a thermostat, we are unaware of the intricacies that create these functionalities under the hood. By exposing only the necessary abstracted functions to the user, we make it easier for the user to use our programs.” I really enjoyed reading this example as it relates abstraction to real-life terms instead of just using coding terms, making it a lot easier to understand. Essentially, abstraction does the same thing. It makes our code easier to understand, allowing others to get a high-level understanding of our program.

Encapsulation: The second main pillar is known as encapsulation. Encapsulation is the idea of hiding and restricting access to the implementation details of our objects. Basically, this protects the data and functions of our code from being improperly accessed by things other than our objects. It makes our code more robust and predictable, allowing others to see its purpose more clearly. Another major benefit of encapsulation is it allows us to see precisely where we can change implementation details, allowing us to safely change our program.
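To make encapsulation concrete, here is a small sketch of my own (not from the blog), written in TypeScript:

```typescript
class BankAccount {
  // The balance is a private field: hidden and protected from outside access.
  #balance = 0;

  deposit(amount: number): void {
    // The object controls exactly how its data is allowed to change.
    if (amount <= 0) throw new Error('Deposit must be positive');
    this.#balance += amount;
  }

  get balance(): number {
    return this.#balance;
  }
}

const account = new BankAccount();
account.deposit(50);
console.log(account.balance); // 50
// account.#balance = 1000;   // compile error: private field is inaccessible
```

Outside code can only go through the public methods, so the implementation details behind them remain free to change safely.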

Inheritance: The third main pillar is known as inheritance. According to the blog, “Inheritance is a technique that involves a child class “inheriting” functionality from a parent or super class.” This increases reusability in our code and keeps it from being redundant.

Polymorphism: The fourth main pillar is known as polymorphism. Polymorphism is a hard one to explain, but it’s very easy to show. Essentially, it is when child classes run the same inherited method but return different values. They use the same method, yet each can return a different value based on what it does. Polymorphism allows us to have a more dynamic inheritance, which enables us to use inheritance for the full value it provides.
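Since polymorphism is easier to show than to explain, here is a minimal sketch of my own (an illustrative TypeScript example, not taken from the blog) that also demonstrates inheritance:

```typescript
// Inheritance: Dog and Cat inherit functionality from the parent class Animal.
class Animal {
  speak(): string {
    return '...';
  }
}

class Dog extends Animal {
  // Polymorphism: the same inherited method returns a different value.
  speak(): string {
    return 'Woof';
  }
}

class Cat extends Animal {
  speak(): string {
    return 'Meow';
  }
}

// One call site works for any Animal; each child decides the actual result.
const animals: Animal[] = [new Dog(), new Cat()];
for (const a of animals) {
  console.log(a.speak()); // "Woof", then "Meow"
}
```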

Link: https://medium.com/@colebuildanddevelop/encapsulation-abstraction-inheritance-and-polymorphism-26aa98042d41

From the blog CS@Worcester – One pixel at a time by gizmo10203 and used with permission of the author. All other rights reserved by the author.

A Closer Look at Gitpod: A Remote Development Environment

Hi class,

For this blog post, I decided to choose the topic of development environments. Development environments are one of the topics that we went over in this course, and it can be interesting to find out more about them on a deeper level.

The source that I have selected is a podcast episode about remote development environments, the link to which is https://www.youtube.com/watch?v=otB0qGGmDFI. Sid Palas, the host, is an avid learner who wishes to share everything code with the world. In this episode he interviews Pauline Narvas, head of community at Gitpod, and Chris Weichel, CTO of Gitpod. The episode covers topics from what a remote development environment is to its inner workings.

Sid begins the podcast by asking Pauline what Gitpod is. Pauline replies that Gitpod “is an open source developer platform that automates the provisioning of ready to code developer environments.” This simply means that the goal of Gitpod is to remove the “friction” of the developer experience by making the development environment more collaborative, joyful, and secure, all at the same time.

Following this section, Pauline is asked why someone would use a remote development environment such as Gitpod rather than their own laptop, which has all their packages, layouts, and environments already set up. She replies that the whole point of Gitpod is to remove the dependency on a local environment. Furthermore, she states that when an update occurs, the environment often breaks or delays the process of coding, while Gitpod automates this away, saving time and mental stress. Chris adds that it eliminates the “it works on my machine” issue, because a remote development environment lives in the cloud and therefore works the same on all participating machines. Additionally, Chris notes that the environment is more secure, since your work is not stored locally on your laptop but in a secure cloud, with dedicated teams keeping your information safe.
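For context, a Gitpod workspace is itself declared as code, which is how every developer ends up with the same environment. A minimal `.gitpod.yml` might look like this (an illustrative sketch based on Gitpod’s documented format, not something discussed in the podcast):

```yaml
# .gitpod.yml — defines the workspace so it starts identically for everyone
image: gitpod/workspace-full    # prebuilt image with common toolchains
tasks:
  - init: npm install           # runs once, when the workspace is first built
    command: npm run dev        # runs every time the workspace starts
ports:
  - port: 3000                  # expose the dev server in the browser
```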

Pauline is then asked how to form a prospering community in a development environment that is totally remote. Pauline states that she joined Gitpod in July of 2021. At the time, developers conversed through multiple outlets, including GitHub, X (formerly known as Twitter), Discord, and other chat applications. She realized this was not effective for communication, so her role became streamlining the community, which was scattered across multiple channels with no central place to come together. From here, the Gitpod community was created, which has proven to be a great central hub for developers. Pauline stresses that having an open outlet for discussion among peers is crucial to a development environment, remote or not.

Lastly, Sid addresses the security of Gitpod and what it looks like from an inner perspective. Chris states it’s an ever-evolving process, but the most important key is creating teams that thrive. Chris goes on to state that you must give the team enough space to act, yes, but also build a team that knows the space, is knowledgeable, and is driven to keep learning about an ever-changing field.

My personal comment is that I found this to be really insightful. Throughout the interview, I felt like they were not simply having a podcast but talking to me as a viewer. As a fairly new programmer, I have not been exposed to Gitpod to the extent other programmers have, but seeing what they did with Gitpod, I can see why a lot of programmers use it. Gitpod saves a lot of time in a development environment and spares the stress of hearing “it works on my machine”; it will work on everyone’s. Furthermore, I had been kind of intimidated by Gitpod, but after listening to this I am eager to use it more often for coding projects, whether by myself or with a team. It seems like a really great tool that I should be more involved with.

From the blog CS@Worcester – Programming with Santiago by Santiago Donadio and used with permission of the author. All other rights reserved by the author.

Software Maintenance

For my third blog, I will explore software maintenance and its important role in the SDLC process. Maintenance is typically the last step, because these updates and tweaks are made after the product is finished. We have touched on the SDLC process and Scrum during our in-class activities and examined both their differences and similarities. I found this source, which goes into more detail about software maintenance while explaining the different types of maintenance.

For the most part, maintenance in our in-class discussions of the SDLC process was a period of bug fixing or adding new features the customer wanted in the software. The blog lists two more reasons why software may need to be altered: a policy change, or a business-level change like the acquisition of another company. In our in-class discussion, we mainly went over two types of maintenance: corrective maintenance, which consists of updates correcting problems found by an end user, and adaptive maintenance, which consists of keeping the program up to date. A new type of maintenance I hadn’t thought of, but the source pointed out, is preventive maintenance: updates that aim to prevent future problems in the software. Some examples of preventive maintenance are regular cleaning of code or replacing outdated sections with newer code.

The source then goes on to talk about the costs of each phase of the software process cycle, and this section caught me by surprise. It states that a study found maintenance can be as high as 67% of the cost of the total software process cycle. I always thought that designing or testing would have a bigger slice of the cost than maintenance, but the source again highlights that, on average, the cost of software maintenance is more than 50% of all SDLC phases combined. It then gives some context for why this phase can be so expensive: the standard age of software can be up to 10 to 15 years, which creates a long commitment to pay for its upkeep; the structure of the program and the language used in the programming affect costs; and changes that are made are often undocumented, which leads to problems in the future.

This source did a good job of going in depth on the maintenance stage of the SDLC process we learned about in various in-class activities. It gave me a new sense of where most of the budget goes during the SDLC stages while explaining that each type of maintenance varies by its nature and characteristics. If you would like to know more about the most costly phase of the SDLC, then I would recommend reading it.

Source: https://www.tutorialspoint.com/software_engineering/software_maintenance_overview.htm

From the blog Mike's Byte-sized by mclark141cbd9e67b5 and used with permission of the author. All other rights reserved by the author.

Adaptable Web Designs

I chose the blog post “Designing for the Unexpected” by Cathy Dutton because it addresses how one can create designs that withstand unexpected content changes. The blog shows us strategies we can use so we don’t get stuck when content or devices change. On my own time, I have been learning web design, so that was one of the main factors when choosing this blog. I chose this post so I can learn how to avoid common mistakes and follow the strategies laid out to design in the most efficient way possible.

In the blog, Dutton explores strategies for creating adaptable web designs that accommodate unforeseen content changes and evolving device landscapes. She reflects on the evolution from fixed-width designs to responsive layouts, emphasizing the necessity of planning for flexibility from the outset. Dutton recounts her early experiences with web design and highlights the challenges of transitioning to responsive design, noting that it requires comprehensive planning during the design phase rather than being an afterthought. To implement responsive designs, Dutton initially utilized percentage-based layouts with native CSS and utility classes, later incorporating Sass for reusable code and more semantic markup. Media queries played a crucial role in this process, allowing designs to adapt at specific breakpoints to maintain readability across different screen sizes. However, she observed that this method often necessitated complex markup, posing challenges for content management, especially for users without extensive HTML knowledge.

Dutton introduces the concept of intrinsic design, a term coined by Jen Simmons, which leverages new and existing CSS features to create layouts that respond organically to content and available space. This approach employs the ‘fr’ unit to distribute space flexibly without compromising content legibility, enabling designs to adapt dynamically to varying content and container sizes. Intrinsic design moves beyond predefined breakpoints, fostering components that are inherently responsive.

The article also discusses the limitations of relying solely on frameworks like Bootstrap for responsive design. Dutton emphasizes the importance of designing for diverse user contexts, acknowledging that users interact with websites across various environments and devices. By adopting flexible design principles and focusing on content adaptability, designers can create resilient and future-proof web experiences that cater to unforeseen changes and diverse user needs. The blog advocates for a shift towards intrinsic design methodologies that prioritize content flexibility and responsiveness. By embracing CSS advancements and moving beyond rigid frameworks, designers can craft web experiences that gracefully adapt to the unpredictable nature of content and device evolution.
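As a small illustration of the intrinsic-design idea, a grid can adapt to its content and container without any media queries (a minimal sketch of the commonly cited CSS Grid pattern; the class name is my own):

```css
/* Each column gets at least a readable minimum, then an equal share (1fr)
   of the free space; the column count adapts to the container on its own. */
.card-grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(12rem, 1fr));
  gap: 1rem;
}
```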

From the blog, I was able to learn the best strategies when it comes to designing an adaptable web interface. Going forward, I plan to incorporate these strategies as I continue to learn more about designing web pages. After reading, I feel like I will be able to increase my learning in this area as well as be able to share these strategies with my peers.

https://alistapart.com/article/designing-for-the-unexpected/

From the blog CS@Worcester – Giovanni Casiano – Software Development by Giovanni Casiano and used with permission of the author. All other rights reserved by the author.

REST API usage

I’m very unfamiliar with REST APIs, but throughout most of our classes I’ve learned that they are very important, especially for moving forward with development as a career. Since I was not very familiar with the topic, I wanted to look up the basics of how these APIs function, and this is the site that gave me that information: https://www.redhat.com/en/topics/api/what-is-a-rest-api. The site is essentially documentation describing APIs, REST APIs, and REST, each in its own category.

The documentation covers the basics of REST, such as the client-server architecture and communication, the representations of resources exchanged between requester and endpoints, the format the data is actually transferred in, and the guidelines of REST functionality as well. The post also covers what an API has to include to be considered a RESTful API; beyond the aforementioned topics, it covers the specifics of how the resources used by the client and modified on the server should behave. Cacheable data transferred between client and server is also brought up, which I think is actually the most important topic, since any data lost between actions would result in a broken, useless API.
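To picture the client side of that exchange, a REST call is just an HTTP request to a resource URL (a minimal sketch; the host and resource here are hypothetical):

```typescript
// Hypothetical endpoint: GET asks the server for a representation
// of the /users/42 resource, typically returned as JSON.
const response = await fetch('https://api.example.com/users/42');

// The status code and headers tell the client how to interpret the reply,
// including whether the data may be cached and for how long.
console.log(response.status);                       // e.g. 200
console.log(response.headers.get('cache-control')); // e.g. "max-age=3600"

const user = await response.json();
console.log(user);
```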

Since I’m very uninformed on the topic of REST and I haven’t set up any personal development environments to work with it, I won’t be putting much of this newly gained knowledge to use at the moment, but learning about it for future reference will definitely help me in the long run. I’ll most likely set up the development environments needed and study up more on the internals and the languages that work best for this type of development, since understanding server and client architecture applies to more than just REST. It also applies to working with pretty much any application that multiple clients will use, depending on what they ask for. This also gets into the request part of the communication, which I have never really looked into in depth, so seeing how it functions will also help my understanding of most other client-server communication.

Honestly, when we first discussed REST in our classes, I still didn’t really understand the entire process of what exactly was happening. Reading through this post, though, definitely clarified some of the topics we went over and more, especially on the back-end side of the API. Going forward, I will continue to research REST; it seems very interesting to me and, again, will be very helpful in understanding more about architecture in bigger programs.

From the blog CS@Worcester – CS Blog by Mike and used with permission of the author. All other rights reserved by the author.